If you’re using AI chatbots in a regulated industry (like healthcare or banking), have your end-users shared any discomfort with using them or distrust of their output?

2.2k views · 1 Upvote · 3 Comments
Chief Data Officer in Media · a year ago

I have heard both concerns from multiple clients. Building small (100M–1B parameter) language models that run on low-cost hardware works very well. Developing a single platform where all ML and AI tools are available helps keep shadow tool usage to a minimum.
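As a rough sanity check on the low-cost-hardware claim, here is a back-of-envelope estimate of the weight-memory footprint for models in that size range (the helper function below is illustrative, not part of the original discussion):

```python
def model_memory_gb(num_params: int, bytes_per_param: float) -> float:
    """Approximate weight memory in GB: parameter count x bytes per parameter."""
    return num_params * bytes_per_param / 1e9

# A 1B-parameter model in fp16 (2 bytes/param) needs roughly 2 GB for weights;
# a 100M-parameter model needs only ~0.2 GB -- well within commodity hardware.
print(model_memory_gb(1_000_000_000, 2))  # 2.0
print(model_memory_gb(100_000_000, 2))    # 0.2
```

Activations, KV cache, and runtime overhead add to this, but the estimate shows why the 100M–1B range fits comfortably on low-cost CPUs and consumer GPUs, especially with int8 quantization (1 byte/param) halving the figures above.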

Global Chief Cybersecurity Strategist & CISO in Healthcare and Biotech · a year ago

Yes, end-users have expressed discomfort and distrust of AI chatbots across industries, and with good reason! Concerns often stem from data breaches and inaccurate responses. It’s crucial to address these issues by implementing strong data security measures, clearly communicating them to users, ensuring response accuracy, seeking feedback, and being transparent about data handling practices.

Senior Director of Technology in Software · a year ago

We are using an AI chatbot for our feedback messages. The bot interprets the customer's response and, based on its tonality, starts the conversation.

We don't make recommendations about medicines or other health-related issues on chat, but we plan to venture into that area soon.

