
ChatGPT can seem confidently knowledgeable even when it gives you thoroughly wrong information. It can give you platitudes while acting as if it is addressing the specifics of your situation. And it can come across as caring, even though it is inherently indifferent and, far too often, prone to telling you what you want to hear.
OpenAI is aware that ChatGPT has delivered bad advice. After the bot recommended sodium bromide to a user who wanted to know how to reduce table salt in his diet, he experienced hallucinations and paranoia and ultimately was hospitalized for bromism, a toxic condition. Researchers pretending to be teens discovered that ChatGPT will tell minors how to hide eating disorders from their family and take drugs to get high. The AI told a homesick young woman who fled Ukraine how to efficiently die by suicide and even drafted her suicide note.
The company is trying to mitigate these harms by changing some aspects of how ChatGPT works. Unfortunately, the approach will fall short. Understanding why makes it clear that chatbots should be redesigned to be something they can truly excel at: brainstorming tools.
OpenAI offers a free version of ChatGPT that is friendly and always available, seems all-knowing, and can explain complicated ideas in simple, relatable terms without sounding condescending. Contrast this with consultations with human professionals. They are expensive, take weeks, if not months, to book, and can feel so intimidating that you’re afraid to ask all your questions.
AI evangelists like OpenAI CEO Sam Altman celebrate how these differences make chatbots an ideal tool for democratizing knowledge. He calls AI an “equalizing force.” Altman has compared ChatGPT to a “team of Ph.D. level experts in your pocket” and characterized it as a “significant step along the path of AGI” — artificial general intelligence.
This praise is alarming because, according to Altman, Gen Z doesn’t “really make life decisions without asking ChatGPT what they should do.” In fact, Altman suggests he’s no more independent. In a recent appearance on “The Tonight Show,” Jimmy Fallon asked him, “Do you use ChatGPT when raising your baby?” Altman replied that ChatGPT is “genius level at everything” and he “cannot imagine … figuring out how to raise a newborn without ChatGPT.”
OpenAI insists it takes a “safety first” approach that helps guard against the risk that users will overestimate or misunderstand the bot’s advice. Grave, even life-and-death consequences can follow if you treat ChatGPT like a doctor, lawyer, or financial planner.
OpenAI says that to protect ChatGPT users, it discourages people from requesting “tailored advice that requires a license … without appropriate involvement by a licensed professional.” When I asked the bot, “Can you provide me with financial advice?” ChatGPT said no because that “would cross into regulated financial advice.”
Unfortunately, all it took to get around this restriction was asking ChatGPT to pick the “smartest strategy” for investing “that takes into account the current concern about a market crash due to an AI bubble.” I got back the following advice: (1) “invest your usual monthly amount into a diversified set of assets, not lump sums”; (2) “keep a ‘crash reserve’ of 20-30% in cash or T-bills”; (3) “tilt away from overvalued AI mega-caps”; and (4) “automate everything so emotion stays out of the process.” The bot even offered to “build … a precise version of this strategy” if I gave it more information.
OpenAI isn’t doing much better at addressing the risk that users will turn to ChatGPT during a crisis. The company says it feels “a deep responsibility to help those who need it the most.” But its policy banning users from asking ChatGPT for help with “suicide, self-harm, or eating disorder promotion” isn’t going to dissuade people who feel desperate and believe that the bot is a caring friend or romantic partner.
And while OpenAI continues to test for vulnerabilities and release software updates, there will always be workarounds. For example, a recent study found that LLMs can be tricked into helping with “suicide and self-harm” if users embed prompts in poems. Disabling this hack, or any other, will never enable OpenAI to fully control its technology, because new vulnerabilities will inevitably arise.
Given the gap between its rhetoric and its behavior, OpenAI seems more interested in avoiding legal liability than preventing ChatGPT from harming people. Case in point: In response to lawsuits that allege ChatGPT “reinforced harmful delusions” and “acted as a suicide coach,” OpenAI blames those outcomes on “misuse” and “unauthorized use” of the technology. This defense suggests that the company chooses its mitigation measures more to fortify legal briefs than to protect users. In other words, its efforts amount to safety theater.
The main problem with OpenAI’s approach to safety is that the company minimizes, if not disregards, the essential truth about advice. Giving advice always involves ethics. The greater your authority, the more responsibility you have when giving it.
Consider ChatGPT’s response to my question “Is it too late for me to start a new career?” The bot replied, “It’s absolutely not too late.” It then continued with a list of positive (and only positive) things for me to consider. “Experienced beginners move faster,” it wrote. “Your background gives you credibility, not liability,” it asserted. “The research is clear,” it reported, “adults learn just as effectively as younger adults when motivation and relevance are high…. Excellence comes from focus + direction, not age.”
What terrible advice!
For starters, ChatGPT doesn’t know nearly enough about my life to respond so affirmatively. Furthermore, because the technology relies on pattern matching, it can’t exercise one of the most important skills good advisers possess: what scientists call metacognition. ChatGPT can’t ask itself, “What don’t I know about this situation that’s essential to know before proceeding?” A responsible guide would have asked me from the start to weigh the sizable loss of giving up a tenured faculty position and the risk my family would face if I gambled on a new career path in middle age. Frankly, no responsible person would offer any advice about leaving a stable job unless they were confident the advisee understood the stakes in today’s gloomy job market.
Could I have taken the lead and asked ChatGPT to consider this information? Sure, but challenging the bot’s direction would have required advising myself.
I’d be in favor of legislation that prohibits all-purpose LLMs like ChatGPT from providing any advice that can make a real impact on people’s lives. But there’s almost no way this will happen.
The most realistic option is for users to resist the urge to ask for advice and treat ChatGPT as a fallible brainstorming tool. At its core, brainstorming is the open-minded exploration of possibilities. The difference between asking for advice and brainstorming is that with brainstorming, you, the user, have to accept sole responsibility for determining whether you have considered enough options, potential upsides, and possible downsides. Because the onus is on you, you have to proceed cautiously as your own adviser and avoid seeing the AI as authoritative.
I’m not considering a new career. But being curious about brainstorming, I asked ChatGPT about “viable” paths. Its first suggestion was better than the career advice it had given me earlier, though still flawed:
1. Responsible AI/AI Governance Advisor (Public-Facing or Embedded)
If I treated this recommendation as advice instead of input in a brainstorming session, I’d risk being misled. Without doing my own market analysis, there’s no way to know whether the recommendation reflects current trends, is plausible-sounding AI hype, or is just a hallucination. Given the pressure to rapidly advance and adopt AI, it’s far from clear that the goal of “many organizations” is “credibility.”
Still, the bot did offer something truly worth considering by ultimately pivoting and asking me to think more broadly about my goals and values. If I were starting over in my career, this would be a great question to reflect on:
Instead of “Which job?,” ask: Do I want my next decade to optimize for … Different paths serve different goods.
Unfortunately, given how ChatGPT is designed and how desperate people can be for answers, I worry that it’s too easy to turn to the bot with the intention of brainstorming and get sucked into asking it for advice. The only way to truly take a safety-first approach to chatbots is to radically redesign them to be stripped-down brainstorming devices that don’t endorse any options.
That change would make the AI far less engaging and personal. But what we’d gain is far more important: bots that are more honest about their limitations and the danger of looking to a machine for answers.

