Using chatGPT to answer a flea question


dy5
Posts: 122
Joined: Sun Feb 07, 2010 7:50 pm
Location: College Park, MD

Using chatGPT to answer a flea question

Post by dy5 »

ADi posted a great photo of a flea in the Technical and Studio Macro forum. It got me wondering about some flea structures, called the genal and pronotal combs.

Just for fun, I asked ChatGPT to tell me the function(s) of the combs. It quite quickly came up with an authoritative-sounding response. The response did contain the accepted function of anchoring the flea to its host's hair. But it also gave two other functions that seemed unlikely: grooming itself (including removing debris from its body), and detecting and locating hosts.

To follow up, I asked ChatGPT to tell me where it got its information. It gave three papers as its sources, and the citations looked perfect. I'm a university professor and do literature searches all the time, but I tried every trick I know, including trips to each journal's archives, and could not find the papers. It appears that none of those papers actually exists.

Providing incorrect information is bad, of course. But what really floored me were the citations. The authors were appropriate, the titles were appropriate, and the journals were appropriate. Even the publication years made sense based on previous flea research and on the authors' publication history. Creating such deeply plausible citations is really quite an amazing accomplishment. Disturbing, as well.

Cheers, David

rjlittlefield
Site Admin
Posts: 23564
Joined: Tue Aug 01, 2006 8:34 am
Location: Richland, Washington State, USA
Contact:

Re: Using chatGPT to answer a flea question

Post by rjlittlefield »

Your experience seems to be universal. ChatGPT is very good at making up references that sound great but are in fact totally bogus. For me it provided completely plausible everything, including DOIs (Digital Object Identifiers). But when I tried to track down the references, I discovered that no articles by those titles existed, and the DOIs were either not assigned to anything or were assigned to something else altogether.
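For anyone who wants to automate that first sanity check, here is a minimal sketch in Python. It assumes doi.org's public handle API, which reports responseCode 1 for a registered DOI; the example DOIs are my own illustrations, not the ones ChatGPT gave me.

```python
# Minimal DOI sanity check against the doi.org handle registry.
# A fabricated DOI will typically come back 404 / "handle not found".
import json
import urllib.request
from urllib.error import HTTPError

def doi_exists(doi: str) -> bool:
    """Return True if doi.org's registry knows this DOI."""
    url = f"https://doi.org/api/handles/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
    except HTTPError:
        return False  # 404 from the registry: no such handle
    return data.get("responseCode") == 1  # 1 means success in the handle API

for doi in ("10.1038/171737a0",       # a real DOI (Watson & Crick, 1953)
            "10.9999/made.up.2023"):  # an invented one, for contrast
    print(doi, "->", "registered" if doi_exists(doi) else "not registered")
```

Of course, a DOI that is registered can still be assigned to some unrelated article, so it's worth eyeballing what it actually resolves to as well.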

As I understand it, the model that underlies ChatGPT simply has no mechanism for recording real references, so the only choices are to either make something up or decline to answer.

What bothers me the most is that the creators of ChatGPT had the capability of declining to answer, and chose to make stuff up instead.

In the best of all possible worlds, I can imagine that they made that choice specifically to raise awareness of the general issue of misinformation. But I have no reason to think that was actually the case, or that the effort will be successful even if it was.

--Rik

dy5
Posts: 122
Joined: Sun Feb 07, 2010 7:50 pm
Location: College Park, MD

Re: Using chatGPT to answer a flea question

Post by dy5 »

Same experience with the DOIs - in fact, that's what made me suspicious in the first place.

In retrospect, there were some subtle warning signs: 1) the combination of papers was almost too good to be true; 2) the titles were a little too simple: short, with minimal jargon; 3) all the titles had a similar, uncomplicated grammatical structure. Having said that, any single citation in isolation would have been completely believable.

I gave ChatGPT a couple of questions that were on an exam my students took recently. The answers were OK, but had similar warning traits: factually OK, but simplistic and without anything interpretive; repetitive sentence structure; no grammatical errors (a giveaway); vague in places (students can also give vague answers, but they do it in more creative ways). The scores for the answers would have been high, but not in the top tier.

Beatsy
Posts: 2105
Joined: Fri Jul 05, 2013 3:10 am
Location: Malvern, UK

Re: Using chatGPT to answer a flea question

Post by Beatsy »

Now watch the overall standard of mainstream printed news and other articles drop precipitously (as if that were possible) as oh-so-overworked hacks use ChatGPT to meet their deadlines - without checking the copy! There's plenty of AI-generated tripe turning up in the comment sections of a few political blogs I read too. Most looks very plausible given a skim-read, but there's something "recognisable" about it that makes it stand out. For now...

Adalbert
Posts: 2427
Joined: Mon Nov 30, 2015 1:09 pm

Re: Using chatGPT to answer a flea question

Post by Adalbert »

Hi David,
OK, in terms of facts, AI still has room for improvement.
But in terms of tone, the comments generated by AI are quite nice and friendly.
There is neither envy nor hatred nor anything similar in them. So far …
Best, ADi

MarkSturtevant
Posts: 1946
Joined: Sat Nov 21, 2015 6:52 pm
Location: Michigan, U.S.A.
Contact:

Re: Using chatGPT to answer a flea question

Post by MarkSturtevant »

I had read before about ChatGPT using made-up references. That is indeed a strange flaw, since isn't it using online sources (with references) to compose its answers?
I teach a capstone class for our biology majors, and they do write a term paper. I did several test runs through ChatGPT to see how it would write sections of the paper, with references. I checked the accuracy of the references and can confirm that they are fake! They look ... ok, if a little too perfect. I recognize the names of many of the authors. The journal years and volumes are correct (for example, Philosophical Transactions of the Royal Society B: Biological Sciences is real, and the year 2015 is indeed volume 370), but the paper it cites does not exist.
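For anyone who wants to run that check in bulk, here is a rough sketch using CrossRef's public REST API; the query string below is just a placeholder for whatever citation you want to verify, and I haven't wired this into anything real.

```python
# Rough citation check: ask CrossRef for the closest indexed matches
# to a free-form citation string. If nothing in the top hits resembles
# the cited title, the reference deserves a very hard look.
import json
import urllib.parse
import urllib.request

def crossref_lookup(citation: str, rows: int = 3):
    """Return (title, DOI) pairs for CrossRef's best matches."""
    query = urllib.parse.quote(citation)
    url = f"https://api.crossref.org/works?query.bibliographic={query}&rows={rows}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        items = json.load(resp)["message"]["items"]
    return [((item.get("title") or ["(no title)"])[0], item.get("DOI"))
            for item in items]

# Placeholder citation, not one of the fabricated ones:
for title, doi in crossref_lookup("Comb structure and host attachment in fleas"):
    print(f"{title}  [{doi}]")
```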
So why make that up? I wonder if this is meant to be a way for people to spot these deepfakes. I play around with the art generator DALL-E 2 from time to time, and one thing about it is that it seems to deliberately mess up faces. Not only the faces of people, but also of animals. So I wonder if that too is a thing put in to limit abuse of these technologies.
Mark Sturtevant
Dept. of Still Waters

Chris S.
Site Admin
Posts: 4042
Joined: Sun Apr 05, 2009 9:55 pm
Location: Ohio, USA

Re: Using chatGPT to answer a flea question

Post by Chris S. »

My bet is that false references are a product of how LLMs (Large Language Models) like ChatGPT work--and don't work. They have "read" a great volume of material, and use this to predict what words should go next. They are somewhat like giant word clouds in multiple dimensions, with weightings. They don't store facts per se, and don't store sourcing information for the word relationships they have shoveled up and weighted. (Or so I think--am trying to wrap my head around this technology but am no expert in it.)
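To make that concrete, here is a deliberately oversimplified toy in Python. Real LLMs are neural networks over subword tokens, not bigram tables, but the point survives the simplification: all that gets stored is weighted word-to-word relationships, with no facts and no sources attached.

```python
# Toy "word cloud with weightings": count which word follows which,
# then generate by weighted random choice. Nothing here records where
# any phrase came from, or whether it is true.
import random
from collections import Counter, defaultdict

corpus = ("the genal comb anchors the flea to the host "
          "the pronotal comb anchors the flea to the hair "
          "the comb anchors the flea to the host hair").split()

weights = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    weights[prev][nxt] += 1

def generate(word: str, length: int = 8) -> str:
    out = [word]
    for _ in range(length):
        followers = weights.get(out[-1])
        if not followers:
            break
        nxt, = random.choices(list(followers), weights=followers.values())
        out.append(nxt)
    return " ".join(out)

print(generate("the"))  # fluent-looking, but traceable to no source at all
```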

So when you ask for a reference, the bot produces something that looks very like a reference, based on its ability to predict what a reference looks like. But nobody has programmed the bot to produce actual references. Bing's implementation of LLM technology recently claimed to provide sourcing, but the example given was weak--links to food recipes that could easily have come from a simple Web lookup.
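In the same toy spirit, here is how "something that looks very like a reference" can be assembled from nothing but plausible parts. Every name, journal, and DOI below is invented for illustration, which is exactly the point: the output is reference-shaped, and refers to nothing.

```python
# A cartoon of predicting "what a reference looks like": fill the
# familiar citation shape with plausible pieces. All parts invented.
import random

surnames = ["Smith", "Okafor", "Lindqvist"]
topics   = ["genal comb", "pronotal comb", "host attachment"]
journals = ["Journal of Insect Morphology", "Parasitology Letters"]

def fake_citation() -> str:
    first_page = random.randint(1, 300)
    return (f"{random.choice(surnames)}, M. ({random.randint(1968, 1995)}). "
            f"The {random.choice(topics)} of fleas and its function. "
            f"{random.choice(journals)}, {random.randint(10, 60)}, "
            f"{first_page}-{first_page + random.randint(5, 30)}. "
            f"doi:10.{random.randint(1000, 9999)}/{random.randint(100000, 999999)}")

print(fake_citation())
```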

I recall seeing a claim that someone was working on creating a bot with more robust referencing capability, but looking just now, I can't find that reference. Conceptually, I think it would be difficult, and require orders of magnitude more processing and storage than current LLMs. This said, things are progressing swiftly, and the pace of change seems to be accelerating.

Current chatbots can't do real references because they make everything up. What they make up often reads to us as true, at least for subjects where the bot has sufficient training to have a big enough word cloud and useful probability weightings. But the bot does not know truth from fiction, or even have a process that defines these concepts.

I see current chatbots as having mostly mastered interfacing with humans. Further processes to define facts and store references strike me as additional--very substantial--developments to add.

A thing that concerns me is that we humans have heuristics for separating sense from nonsense, and chatbots can bypass these heuristics. While they make everything up, their correct grammar and spelling, and decent word production, can make them seem intelligent. And since--at least in some subjects--most of what they say matches actual truth, we can find it easy to believe them. But they are just making everything up. . . .

Members of this forum, by and large, have significant mastery of skepticism. But not everyone in the world is skeptical.

--Chris S.

Edit to add:

After posting, I decided to try pasting my post into ChatGPT and asking the bot to proofread it and point out any changes, thinking perhaps I had missed a comma or left out a word. The reply:

"I did not make any changes to your text as it was well-written and did not require any proofreading or editing. It was a clear and concise explanation of the limitations of current chatbots in terms of providing references and discerning truth from fiction."

MarkSturtevant
Posts: 1946
Joined: Sat Nov 21, 2015 6:52 pm
Location: Michigan, U.S.A.
Contact:

Re: Using chatGPT to answer a flea question

Post by MarkSturtevant »

Chris S. wrote:
Wed Mar 22, 2023 8:17 pm
My bet is that false references are a product of how LLMs (Large Language Models) like ChatGPT work--and don't work. They have "read" a great volume of material, and use this to predict what words should go next. They are somewhat like giant word clouds in multiple dimensions, with weightings...
That squares with some other flaws that I have seen, and it explains some peculiarities of the bot that have been reported. In a case that I saw personally, I asked it to write a description of embryonic development of the fruit fly Drosophila melanogaster (this being an important model for animal genetics, and the basis of the term paper that I have students write). Its description was badly wrong: it gave a very generic account of animal development, which is NOT what insect embryos are like at all. I would have failed that description. But I bet it assembled it from commonly used words around the subject of development.

In another instance I've read about, the bot was asked to write about "junk" DNA. It proceeded to describe a commonly claimed but wrong view about the subject.
Mark Sturtevant
Dept. of Still Waters

Chris S.
Site Admin
Posts: 4042
Joined: Sun Apr 05, 2009 9:55 pm
Location: Ohio, USA

Re: Using chatGPT to answer a flea question

Post by Chris S. »

For whatever it's worth, I've just now had a chance to play a bit with Google's "Bard" chatbot. Compared with ChatGPT, it seems primitive. It does, however, attempt to provide references, but these resemble the results of a poorly-done Web search. Google clearly has a lot of catching up to do.

Meanwhile, this got announced yesterday: ChatGPT gets “eyes and ears” with plugins that can interface AI with the world--Plugins allow ChatGPT to book a flight, order food, send email, execute code (and more).

--Chris S.

Scarodactyl
Posts: 1619
Joined: Sat Apr 14, 2018 10:26 am

Re: Using chatGPT to answer a flea question

Post by Scarodactyl »

I tried using Bing AI to search for microscope stuff. I was not very impressed.

Chris S.
Site Admin
Posts: 4042
Joined: Sun Apr 05, 2009 9:55 pm
Location: Ohio, USA

Re: Using chatGPT to answer a flea question

Post by Chris S. »

MarkSturtevant wrote:
Wed Mar 22, 2023 9:21 pm
. . . Its description was badly wrong in that it wrote a very generic description of animal development, which is NOT what insect embryos are like at all. I would have failed that description. But I bet it assembled it from commonly used words around the subject of development.

In another instance I've read about, the bot was asked to write about "junk" DNA. It proceeded to describe a commonly claimed but wrong view about the subject.

I just now came across a nicely-phrased description of what AI chatbots produce (emphasis mine):
". . . the thing that makes AI distinct from humans is that it’s “extremely consistently average,” says Eric Wang, Turnitin’s vice president of AI.

Systems such as ChatGPT work like a sophisticated version of auto-complete, looking for the most probable word to write next. “That’s actually the reason why it reads so naturally: AI writing is the most probable subset of human writing,” he says."


Source: "We tested a new ChatGPT-detector for teachers . . . ." by Geoffrey A. Fowler, The Washington Post
So it makes sense that generative AI--at least at present--would produce output matching a widely-held, if incorrect, viewpoint.
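To see the "most probable word" idea in miniature, here is a toy greedy decoder; the weights are made up, but always picking the highest-weight continuation is why the output lands on the most average phrasing, every time.

```python
# Greedy decoding over toy next-word weights: at each step take the
# argmax. Deterministic, fluent, and "extremely consistently average."
weights = {
    "the":  {"flea": 5, "comb": 3, "average": 1},
    "flea": {"is": 4, "jumps": 2},
    "comb": {"anchors": 3, "is": 2},
    "is":   {"small": 3, "average": 2},
}

def greedy(word: str, length: int = 4) -> str:
    out = [word]
    while len(out) < length and weights.get(out[-1]):
        nxt = max(weights[out[-1]], key=weights[out[-1]].get)  # argmax
        out.append(nxt)
    return " ".join(out)

print(greedy("the"))  # always "the flea is small"
```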

--Chris S.
