5 Comments
May 2, 2023 · Liked by Cameron R. Wolfe, Ph.D.

I really enjoyed this edition and look forward to the next part of the series, Cameron. I've been meaning to learn more about this, so your newsletter's timing could not be better :)

May 1, 2023 · Liked by Cameron R. Wolfe, Ph.D.

And... there was no way that I could see to prompt it (now I do come close to your topic here) so that it would not fabricate references...

May 1, 2023 · Liked by Cameron R. Wolfe, Ph.D.

Entirely off topic, Cameron, please excuse me. If you have covered this elsewhere, please just send me the link: thank you. I was surprised when ChatGPT fabricated both references that were written out in full (authors, title, journal, pages, year) and links to journal articles (such as a link to a Nature Communications article that did not work--because the article did not exist). My nephew explained that folks who make or study LLMs refer to this as "hallucinating." Can you comment on this? Are future versions going to eliminate it? It was frankly amazing to me how quickly it could fabricate references (in my particular case, about studies on the triple point of water). It repeated the same pattern every time I responded that the two provided references were not real: it would apologize, then generate another two false references, politely explaining that these two were correct. After about 10 cycles of this, and thus 20 false references, I realized that it could do this literally ad infinitum. Welcoming feedback on this, thank you.
