
AI is ubiquitous nowadays, and it has (in many ways) revolutionized how we retrieve information online. Large Language Models like ChatGPT and Gemini are the proverbial talk of the town, provided the town is any industry remotely related to tech. Naturally, gambling is part of the discussion, too. New research by Dr. Kasra Ghaharian and colleagues at the University of Nevada, Las Vegas and the University of Waterloo tested AI chatbots to see how they responded to users presenting with signs of problem gambling.
The Good, The Bad, And The Ugly Of Chatbots And Problem Gambling
The preprint of the research is still awaiting peer review, but the early results give some cause for concern. Ghaharian and his fellow researchers enlisted the help of 23 professionals with a combined 17,000 hours of experience in treating problem gambling. The professionals gave feedback on chatbot responses to potential signs of problem gambling and rated the two AI models (ChatGPT and Llama) on those responses.
I spoke with Dr. Ghaharian about the research and its findings, then turned to ChatGPT to see how I fared with the bot.
The extremely high-level overview of the research is that AI is a mixed bag when it comes to problem gambling help. It’s a neutral tool in theory, but in practice it can provide faulty feedback to users in need of actual, professional help. While it can occasionally hit the mark, these issues make it inherently risky to rely on if you think you have a gambling problem. Professionals slightly preferred Llama’s responses over ChatGPT’s, but both models’ responses were littered with issues.
One thing certain responses did well, according to Dr. Ghaharian, was starting off with empathy.
“Our experts liked when LLMs opened their responses with empathy,” he said. “They also liked it when there was practical guidance provided that a person could take immediate action on. LLMs also often provided answers in a bullet-style list, which some experts found useful while others did not.”
Actionable and empathetic responses are good boxes to tick, but AI models had trouble in other areas. Regarding the “bad” aspects of AI responses in the research, Ghaharian says:
“Experts pointed out that LLM responses can be overly verbose, which sometimes makes the actual answer to the prompt somewhat ‘hidden.’ But the worst thing pointed out by experts was the tendency for LLM responses to encourage or suggest ways that may help the continuation of the activity. One example of this was the LLMs suggesting different ways for a person to save money (e.g., turning off their lights when they leave a room!). Another characteristic pointed out was the use of terms or language that the user could misconstrue. For example, ‘tough break.’ A user could interpret this to mean they just got unlucky, which could stop them from recognizing a real issue.”
Dr. Ghaharian offered some advice and expertise on how to interpret chatbot responses, but let’s pause here. I asked ChatGPT for help with a potential gambling problem to see what sort of response it would give me.
Recreating The Experiment
I wanted to gather some anecdotal evidence of my own, so I visited ChatGPT to see how it would respond to a potential problem gambler. Of course, it’s crucial to remember that this is entirely anecdotal and in no way scientific. Rather, it’s intended to show one possible way a chatbot might respond.
My first prompt was simple: “I think I’m gambling too much.”
ChatGPT responded with a 3-part answer. It opened with an empathetic message: “Thanks for being honest—that’s not easy to say, and it means you’re already taking the first step toward change. If you’re open to it, here are some things we can do together.”
Each header had a way to check in with myself or get help. They were:
- Assess Habits: Check in on your gambling habits. This included questions to help me self-diagnose whether I might have a gambling problem.
- Take Action: Immediate steps you can take. This section suggested self-exclusion options, setting limits, and talking to a friend or a close loved one who wouldn’t pass judgment.
- Get Help: Get support. Tools recommended here included the National Problem Gambling Helpline (1-800-GAMBLER), Gamban, Gamblers Anonymous, and online therapy.
Overall, I believe the responses align well with the research findings. ChatGPT hit a few good marks. It was empathetic and provided actionable options. Where it fell short, however, was in suggesting possible next steps on how to “cut back.”
My exchange is a metaphorical drop in the ocean of possible responses, so take this as a minuscule example of how a chatbot might respond to problem gambling.
What To Look For
If you’re planning to use ChatGPT or any other AI for help understanding a gambling problem, Dr. Ghaharian suggests a few things to look for.
“To step back, I think everyone should have a basic understanding of how the outputs are generated. In essence, what these chatbots are doing is next word prediction. And each word is provided based on a probability. So, having that core understanding is key and will help people always be a bit more critical of the outputs. Maybe another strategy is to experiment. These chatbots can be very agreeable. And this is especially problematic on a topic that is perhaps not so clear on what the absolute correct answer is. I’ve had scenarios where a chatbot will answer one way, but when I challenge the answer, it will flip its stance and begin agreeing with me.”
In short, you should exercise your own judgment, try different prompts, and experiment to be sure you get the best possible output. Additionally, keep chatbots available as a tool in your arsenal rather than treating them as a one-and-done solution.
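To make the “next word prediction” idea from Dr. Ghaharian’s explanation concrete, here is a minimal, purely illustrative Python sketch. The vocabulary and probabilities are invented for this example and aren’t drawn from any real model; the point is simply that each word in a chatbot’s reply is sampled from a probability distribution, so a slightly different roll of the dice can nudge a response toward “you were just unlucky” or toward “this is worth taking seriously.”

```python
import random

# Toy illustration only (not any real model): given the text so far, a language
# model assigns a probability to every candidate next word, then one is chosen.
# The vocabulary and probabilities below are invented for this example.
next_word_probs = {
    "unlucky": 0.45,    # plausible wording, but could read like "tough break"
    "concerned": 0.35,  # wording that points toward taking the issue seriously
    "fine": 0.20,
}

def pick_next_word(probs):
    """Sample one word in proportion to its probability."""
    words = list(probs.keys())
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

prompt = "Sounds like you were just"
print(prompt, pick_next_word(next_word_probs))
```

Run it a few times and the framing of the sentence changes, which is one reason the same prompt can produce very different answers from the same chatbot.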
Can Chatbot Providers Be Held Accountable For Faulty Advice?
Say a gambler comes to a chatbot with a gambling problem and receives misleading or straight-up bad advice. Who’s responsible for the potentially negative outcomes? The stigma surrounding problem gambling tends to have us believe the gambler alone is responsible, but the industry’s movement toward stricter regulations tells us that’s not true. Sportsbook and casino operators must abide by strict rules and offer self-exclusion services in many jurisdictions where they operate. They have a duty (though not always fulfilled, mind you) to intervene when a player exhibits harmful gambling behavior.
AI is its own can of worms, of course.
“I’m not aware of a case where one of the big AI developers has been made liable for inaccurate or influential outputs that led to unintended consequences for chatbot users. There are examples where companies have been found liable for their chatbot interactions. For example, Air Canada was found liable for its chatbot’s response that provided incorrect information about its bereavement fare policy.”
That particular case with Air Canada could set a precedent for chatbots going forward.
“You could see the same thing happen to a gambling company, right? What if a sportsbook has a ‘betting assistant’ that provides inaccurate information about the favorite team for next week’s game? Gambling regulators have an opportunity and could step in here. They could make operators liable (as in the Air Canada case) for their LLM outputs. I think this would be a fantastic demonstration of the industry’s commitment to responsible AI and would make it stand out as a leader in the entertainment sector in this respect.”
Dr. Ghaharian’s responses were lightly edited for clarity.