Don't trust Bing AI searches...

xmontrealer

(he/him/it)
Joined: May 23, 2005 · Messages: 11,459 · Reaction score: 9,344 · Points: 113
It seems Bing AI scours the web during a search and returns whatever information and posts it finds, whether real or not.

There are many bullshit posts, likely phishing scams, saying that low-income Canadian seniors will receive $2,200 as a special one-time bonus on top of the usual CPP, OAS, and GIS payments they are to receive today.

A friend of mine in Winnipeg told me yesterday, all excited, that he had heard about it from one of his friends, and that when he googled it, an AI summary came up and confirmed the bonus payment.

I checked it out, and sure enough, a simple search on my computer came up with the same result from Bing's AI, along with a list of URLs stating the same thing. None of them were from the CRA website.

So I then added the word "scam" to my search, and a totally different Bing AI result came up, including a link to the CRA benefits web page, which said there is no such thing. Both the CRA and Bing AI now state that the $2,200 bonus is mentioned as legit on various websites, but it is not.

Hey, let's be careful out there...
 

silentkisser

Master of Disaster
Joined: Jun 10, 2008 · Messages: 4,839 · Reaction score: 6,312 · Points: 113
Anyone using AI should understand that it is NOT infallible. They get things wrong, and they can also have "hallucinations," where they make shit up. You need to double-check their work to ensure accuracy. Keep in mind, the AI is only as good as the data it's given. Garbage in, garbage out.
 

Maker17882

Well-known member
Joined: Jan 31, 2017 · Messages: 245 · Reaction score: 275 · Points: 63
silentkisser said:
Anyone using AI should understand that it is NOT infallible. They get things wrong, and they can also have "hallucinations," where they make shit up. You need to double-check their work to ensure accuracy. Keep in mind, the AI is only as good as the data it's given. Garbage in, garbage out.
AI should be called NAI (Not Actual Intelligence), and absolutely NONE of the current ones (DeepSeek, Gemini, ChatGPT, LaMDA, Anysphere, Anthropic, etc., etc.) work well at all. I test them for work, and my brain is batting 1.000 vs. AI. All are consistently wrong, and the much-hailed RIP PPT is still not realized. They are getting better and will be there some day, could be soon, but they are not there now, period. Some AI tools that come with ERPs are getting better; SAP and Oracle have some nice reporting available (if the task is "read a table and puke out a total and analysis," these tools are maturing quickly). For now I use AI, but I always have to proofread and correct. If I am in a hurry, I just use my brain; it's faster and I know I do not need to validate the results.

The AI Overlords are coming, and I will welcome them. My guess is I will be well into retirement when this bubble pops, and for sure it's gonna pop (see: the dot-com pop, the telecom pop, etc., etc.).
 
Reactions: silentkisser (Like)

silentkisser

Master of Disaster
Joined: Jun 10, 2008 · Messages: 4,839 · Reaction score: 6,312 · Points: 113
Maker17882 said:
AI should be called NAI (Not Actual Intelligence), and absolutely NONE of the current ones (DeepSeek, Gemini, ChatGPT, LaMDA, Anysphere, Anthropic, etc., etc.) work well at all. I test them for work, and my brain is batting 1.000 vs. AI. All are consistently wrong, and the much-hailed RIP PPT is still not realized. They are getting better and will be there some day, could be soon, but they are not there now, period. Some AI tools that come with ERPs are getting better; SAP and Oracle have some nice reporting available (if the task is "read a table and puke out a total and analysis," these tools are maturing quickly). For now I use AI, but I always have to proofread and correct. If I am in a hurry, I just use my brain; it's faster and I know I do not need to validate the results.

The AI Overlords are coming, and I will welcome them. My guess is I will be well into retirement when this bubble pops, and for sure it's gonna pop (see: the dot-com pop, the telecom pop, etc., etc.).
I have read that AI has proven itself superior (or at least equal) to a human in things like radiology, where it can detect things like cancer at an equal or higher rate than an actual doctor. I've read that drug companies are using AI to data-mine results from clinical studies to help find patterns that might be obscure to the humans running the tests. It is getting better every day. But it still fucks the pooch way too frequently. For me, I use it occasionally to help with constructing an outline for a report, or as a starting point to write something in a format I'm unfamiliar with. But it still requires a human to ensure things are accurate.

I still think back to a lawyer about three years ago who asked AI to find case law to help him win a motion in court. Unfortunately, it hallucinated and created two or three fictitious cases which the lawyer didn't fact-check... and he ended up submitting them in court. The other lawyers were puzzled because they couldn't find these cases. I don't exactly remember what happened to the lawyer who submitted the bogus documents, but I do recall they were sanctioned. It is a cautionary tale for anyone using AI.
 

Shaquille Oatmeal

Well-known member
Joined: Jun 2, 2023 · Messages: 8,249 · Reaction score: 8,852 · Points: 113
That is the purpose of AI, though.
AI spits out information that is already available.
You would need to fact-check what it presents.
It isn't capable of knowing what is true and what isn't.
 

Maker17882

Well-known member
Joined: Jan 31, 2017 · Messages: 245 · Reaction score: 275 · Points: 63
silentkisser said:
I have read that AI has proven itself superior (or at least equal) to a human in things like radiology, where it can detect things like cancer at an equal or higher rate than an actual doctor. I've read that drug companies are using AI to data-mine results from clinical studies to help find patterns that might be obscure to the humans running the tests. It is getting better every day. But it still fucks the pooch way too frequently. For me, I use it occasionally to help with constructing an outline for a report, or as a starting point to write something in a format I'm unfamiliar with. But it still requires a human to ensure things are accurate.

I still think back to a lawyer about three years ago who asked AI to find case law to help him win a motion in court. Unfortunately, it hallucinated and created two or three fictitious cases which the lawyer didn't fact-check... and he ended up submitting them in court. The other lawyers were puzzled because they couldn't find these cases. I don't exactly remember what happened to the lawyer who submitted the bogus documents, but I do recall they were sanctioned. It is a cautionary tale for anyone using AI.
Agree, it is doing well in the medical sector (scribe talk-to-text for medical notes/history/charts, and cancer detection from MRIs and other scans). If the task is simple (talk-to-text) or quantitative analysis (image analysis), it is maturing quickly. However, if an "experienced" analysis is needed, it is still a no-go, like the example you note. It will get there.
 