If you’re using AI chatbots in a regulated industry (like healthcare or banking), have your end-users shared any discomfort with using them or distrust of their output?
Global Chief Cybersecurity Strategist & CISO in Healthcare and Biotech, a year ago
Yes, end-users have expressed discomfort and distrust of AI chatbots across industries, and with good reason! Concerns often stem from data breaches and inaccurate responses. It’s crucial to address these issues by implementing strong data security measures, clearly communicating them to users, ensuring response accuracy, seeking feedback, and being transparent about data handling practices.
Senior Director Of Technology in Software, a year ago
We are using an AI chatbot for our feedback messages. The bot interprets the customer's response and, based on its tonality, decides how to open the conversation.
We don't recommend medicines or address health-related issues over chat yet, but we plan to venture into that area soon.
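To illustrate the tonality-driven flow described above, here is a minimal sketch assuming a Hugging Face sentiment pipeline as a stand-in for the poster's tonality model; the model name and reply texts are illustrative, not their actual stack.

```python
# Minimal sketch: pick an opening message based on the tonality of feedback.
# Assumes the `transformers` library; the model below is an illustrative choice.
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

def start_conversation(feedback: str) -> str:
    """Choose a conversation opener from the detected tonality of the feedback."""
    result = classifier(feedback)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.98}
    if result["label"] == "NEGATIVE" and result["score"] > 0.8:
        return "We're sorry to hear that. Could you tell us what went wrong?"
    return "Thanks for the feedback! Is there anything we could improve?"

print(start_conversation("The appointment booking kept failing."))
```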
I have heard both concerns from multiple clients. Building small (100M–1B parameter) language models that run on low-cost hardware works very well. Providing a single platform where all approved ML and AI tools are available helps keep shadow tool usage to a minimum.
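As a rough sketch of what "small model on low-cost hardware" can look like, the following runs a ~100M-parameter model on CPU only; distilgpt2 and the prompt are illustrative assumptions, not the poster's deployment.

```python
# Minimal sketch: run a small (~100M-parameter) language model on commodity CPU.
# Assumes the `transformers` library; distilgpt2 is an illustrative model choice.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2", device=-1)  # CPU only

prompt = "Customer-facing chatbots in regulated industries should"
output = generator(prompt, max_new_tokens=40, do_sample=False)
print(output[0]["generated_text"])
```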