When your AI chatbot invents a policy, who pays the price?

Small businesses see the promise of automation and insights. They also see the fine print – and for many, it’s reason enough to wait.


As consumers and businesses learn to navigate the latest technology revolution, not everyone is quick to adopt AI as a solution. There are natural concerns about AI being too complex, too expensive, or too new to deliver a significant ROI.

A recent MIT study on AI in business, drawing on interviews at 52 enterprises, found that despite $30 to $40 billion in investment, 95% of the organizations studied and surveyed report zero return on investment. It’s no wonder small businesses pause at that and other roadblocks, such as employee skills gaps, how to bridge humans with tech, and who’s liable when AI gets it wrong.

The AI trust and reliability deficit: why SMBs hesitate

Stories about AI pitfalls, many of them cautionary tales, regularly litter the news. In a 2024 civil case, Moffatt v. Air Canada, a customer claimed a bereavement refund on an airline ticket after the airline’s AI chatbot told him he could, but Air Canada employees declined; a tribunal ultimately held the airline responsible for its chatbot’s advice. Chevrolet of Watsonville was targeted by Chris Bakke, who convinced its AI chatbot to sell him a new vehicle for $1. The dealership didn’t honor the deal, but it did remove the chatbot from its website. In many other cases, AI chatbots go off script, insulting the companies that “employ” them, threatening customers, or creating fake policies because of an AI hallucination.

There are a few reasons small businesses may be wary of trusting AI, including the way public chatbots gather their information. Ashley Vassell, a senior product manager at Hydrolix, a cloud data platform, noted that many clients worry about lost engagement when AI chatbots explain a product so thoroughly that users never visit the brand’s website.

“We’re hearing a concern around AI bots scraping their websites for information. This detracts traffic from their original websites and reduces engagement with their products and sort of takes away that ability to have ownership over their brand management, right, or control of brand messaging,” she said.

Losing user engagement is significant for many brands, particularly retailers, but technology and service-based businesses may be more concerned about a bigger issue. Larger commercial AI chatbots could capture intellectual property or proprietary information if a company isn’t careful about how it uses AI tools. If a company’s private information goes public and a competitor discovers useful data, the whole business model could be threatened. There are steps SMBs can take to protect web traffic and data and to integrate AI in safer ways.

The rest of the AI resistance

Small business owners are carefully considering whether AI is right for them at all, or whether it’s a solution they need right now. An AI tool may not yet be helpful in some industries, or there may be blockers specific to a business.

Cost concerns

The number one concern for many businesses considering new software, hardware, or services is cost. Implementing AI isn’t naturally baked into every business’s operating budget, so it can be a hard sell; adopting it often means shifting finances or creating a new budget.

For many SMBs, AI tools can reduce manual labor through automation or analyze data to help with decision making. Through these efforts, a business may be able to operate more efficiently, reallocate resources, and reduce waste.

There’s also the cost of cloud storage to consider. Pricing for data storage at the terabyte or petabyte scale can run to thousands of dollars per month or year, depending on how much storage is needed, putting it out of range even for some enterprises.

Skills gap

Employees and new workers aren’t necessarily reskilling or upskilling fast enough to implement AI projects. A 2025 report from the Pew Research Center shows only about one in five U.S. employees use AI in their jobs. According to a survey by edX, 58% of workers say AI expertise is lacking at their workplaces. And a 2025 statistics report by Kelly Services shows that 64% of executives and 52% of workers recognize an AI skills gap across industries.

The human factor

Figuring out how to combine humans and AI can be a daunting challenge. Beyond the obvious concern about AI replacing humans in the workforce are concerns about loss of expertise or skills, historical data bias, lack of oversight, and a loss of critical thinking.

In a Business Insider article, Bluesky CEO Jay Graber warned companies against outsourcing everything to an AI tool.

“AI is able to automate a lot of critical-reasoning tasks, and if we fully outsource our own reasoning, it’s actually not good enough to run in an automated fashion,” she said.

Mistral AI’s CEO Arthur Mensch told The Times of London that the biggest risk of AI to humans is that people become lazier as they rely more on the technology. 

Data dilemmas

Not all businesses have gobs of data for AI to scour for useful information, so predictive analytics may not be useful for everyone. Most machine learning models need tens of thousands of data points to be reliable. Additionally, the data you feed an AI tool needs to be clean, or you risk biased results, exposed proprietary information, or messy outputs.

Unclear ROI

A big question many business owners have about AI adoption is how it will affect the bottom line. The return on investment hasn’t been obvious to many yet.

“Improvements in satisfaction and loyalty from customers may only show up as higher retention or brand equity,” Vassell said.

As Vassell explains it, many of AI’s benefits are intangible or indirect. Better decision making or lower attrition may not register immediately as revenue, but should show up over time.

Vendor trust

Another big question mark is vendor transparency. If business leaders lean on AI for decision making, they need to be able to explain why a decision is best, which is difficult if they don’t understand how the AI reached its conclusion. Out-of-the-box AI solutions are trained on data that isn’t always disclosed, so business owners may worry about biased answers from an AI.

Additionally, many SMBs are worried about security and privacy. There’s a risk of data leakage, user or employee misuse, and breaches. Also, as discussed below, most vendor contracts have language to protect the vendor in the event of an AI error.

Bridging the gap: how AI consultants build confidence

AI consultants are currently a hot commodity for businesses big and small across many industries that want to incorporate AI. Innovating with AI CEO Rob Howard told Fortune magazine about the $900-per-hour rates many AI consultants command from businesses looking to adopt AI.

“The pricing for this is high in general across the market, because it’s in demand and new and relatively rare to find people who are qualified to do it,” Howard said.

Having an experienced AI consultant step in to analyze a business’s need for AI and develop the strategy and roadmap for implementation can be helpful. Vassell helps companies understand how a product like Hydrolix can interpret data while safeguarding against proprietary data or personally identifiable information (PII) being released to the public.

Many AI companies have rules to protect the consumer by only collecting specific types of data, such as IP addresses. Cost tracking can also put clients at ease. Vassell explained that cost tracking of AI usage protects users from paying for AI compute they aren’t using. It also helps Hydrolix more accurately price its products while protecting profit margins. Additionally, Vassell said they’ve implemented “explainability.”

“We know that all of those pieces: trust, sensitive data, things like that are top of mind for small and medium businesses,” Vassell said. “So when we go into those kinds of customer meetings, those are the pieces that we’re highlighting.”

She gave an example of how Hydrolix applies explainability when a customer questions an anomaly detection output and wants to see how the system is reaching specific conclusions. She can show them the AI explainability data, which walks through how the data and the LLM arrive at the flagged anomalies.

Verdict pending: who’s liable when AI gets it wrong?

If you’ve followed AI news for any length of time, you’re aware of AI hallucinations. For example, a prompt to an AI image generator for a woman holding an ice cream cone yields an ice cream cone with 12 thumbs holding a human, or something equally odd. These glitches may be funny, but what happens when the hallucination occurs between an AI support bot and a customer?

In early 2025, Cursor’s AI support bot responded to customer emails about being logged out when they tried to use the coding software from multiple machines. The chatbot replied that this was “expected behavior” due to a new login policy. Except there was no new policy; the AI invented it. Customers began cancelling their subscriptions and posting their outrage to social media and forums.

Aside from the obvious issue (no new policy), customers didn’t know that the support person responding to emails was actually an AI bot. It was a double whammy for Cursor. 

Now imagine you’re a small business owner using an AI chatbot for support and that bot goes off the rails. Customers are angry and they’re looking to you to answer for the mess. Who’s responsible: you, the business owner, or the third-party company you’re paying for the AI bot that responded incorrectly? 

“Like any good lawyer, I’m going to start off with saying ‘it depends,’ right?” said Rob Rosenberg, a partner with the law firm Moses & Singer. “I think there are a lot of variables.”

Rosenberg specializes in Intellectual Property, Entertainment/Media & Technology, and AI & Data Law; he previously spent more than 22 years at Showtime Networks.

“More often than not, I think the small business would end up taking the hit because so far I think that the AI is viewed like other business tools right now,” he said. 

He used the example of any product a business may release; if it falters in some way, the customer would blame the company, not a tool, software, or piece of equipment the company used to produce the product. So, what happens when, say, an AI hallucination becomes the focus of a lawsuit?

“The courts are still out,” Rosenberg said. “Nobody’s really ruled on that, per se, but you would think that a court at some point would say, from, like, almost like a product liability standpoint, if they took that approach, then they might say, hallucinations are the responsibility of the platform, not of the user or the person who’s implementing the solution.”

There have been cases of AI exhibiting bias, racism, sexism, or discrimination, as Rosenberg points out, and unfortunately, many of these programs are trained on data scraped from the internet. And the internet, for better or for worse, carries bias, racism, and sexism, he said.

“If you’ve trained a system on bad data, that’s going to show up in the finished result,” he said. “But do you want to take a hit for a bias charge or a racism charge? You don’t. So in an ideal world that will be allocated, but regulators will come around and will allocate responsibility to the appropriate party there.”

It’s still very early. And although Rosenberg doesn’t claim to see the future, he estimates that liability for such issues will fall on small businesses about 75% of the time.

“I think that it really remains a question mark as to how hard Washington will come down,” he said. “… We’re seeing some unwillingness or disinterest in putting any more roadblocks in the way that could slow down development.”

Which is why Rosenberg suggests small businesses lean into new technology, such as AI, with caution.

“I always encourage clients when something like this comes along … to get smart about it,” he said. “You shouldn’t shy away from learning as much as you can, reading as much as you can, about the technology. I don’t think it’s going away; I think it’s going to impact every business that you can think of, so it’s a matter of creating your new comfort zone.”

Advice for SMBs investing in AI

Small businesses often don’t have the leverage required to enter into enterprise agreements that may offer extra protection. Rosenberg suggests carefully reading all contracts, circling in red pen anything you’re unsure about, and consulting with a lawyer. There are some specifics to look for. 

“Right away, I put on the top of that list indemnification. I think that is always one of the most important provisions in any contract because if somebody is licensing a platform, a product or service to you, you want to know that they’re standing behind that product. And that isn’t always the case,” Rosenberg said. 

Another one that Rosenberg notes is limitation of liability, a provision where a platform is trying to limit its exposure.

“They’re saying, even if I’m at fault here, my entire exposure under this contract is limited to X. And lots of times X is just the amount of money that you’ve spent with them over the past whatever period of time. But there’s definitely a cap there,” Rosenberg said. “So they’re saying, yes, fine, we will take responsibility, but only up to a certain point.”

A third thing Rosenberg suggests small businesses keep an eye out for is provisions about data use and training rights. Often, he said, contracts state that anything you input into the system can be used by the vendor, which means the data you put in could then become available to competitors.

He also mentioned contract language saying you’re accepting the product “as is.” In terms of assumption of risk, you’re entering an agreement acknowledging that the product isn’t guaranteed to work. This is especially true of out-of-the-box AI products.

The AI waitlist: industries holding off on AI

Not every business should go all in on AI just yet, though implementing it at a smaller scale could pay off as the business grows. Industries that may not benefit from a full-on AI implementation include customer-facing, service-based businesses such as plumbing and electrical work, beauty services, and healthcare. That isn’t to say those industries wouldn’t benefit from some level of AI, especially considering automations, dashboards, and the data those businesses may already be collecting.

Extracting insights from large volumes of data can be time consuming and difficult for a human, Vassell said. Once AI gets involved in analyzing data, the time and cost savings can be huge.

“People in the data field struggle to get the full value out of their data, especially when they’re trying to extract insights in data that is at a terabyte or petabyte scale,” Vassell said. “… [AI] can read large sums of data quickly, summarize it, and present those insights in natural language, so you don’t even need to try and learn SQL.”

Vassell cited an organization that used AI to search contract data in about 15 minutes, a task that normally would have taken a human up to 250 hours.

Even if it seems that AI is overkill for a company that isn’t specifically in the business of data, it can end up being a cost-effective solution for business insights. Plus, AI-powered tools can help automate mundane tasks, such as sending reminders to clients, automatically calculating income and expenses, organizing multiple metrics and graphs on dashboards, and so much more.
