Question for CIOs and heads of IT: What is the biggest challenge you face when you roll out GenAI to employees and why?
Do AI guardrails and policies help at the platform or application level, where we control what's ours (PII, customer information, company information)? Does this align?
The biggest risk is governance around responsible usage of AI tools. Employees notoriously ignore standard written policies, and given the SaaS nature of many AI tools, enforcing AI policies is extremely difficult. How do employees use AI tools, and what data do they share? How do you know if confidential data is being used in GenAI prompts?
You're absolutely right: the biggest risk isn't "AI misuse" in the classic sense, it's the lack of real governance. Most organizations today are dealing with three gaps:

1. Policies exist, but behavior doesn't follow. Employees rarely read long AI guidelines, and even when they do, pressure to deliver fast makes them cut corners.

2. SaaS AI tools create a shadow-AI problem. With tools being browser-based and easy to access, IT often has no idea who's using what. That makes enforcement almost impossible.

3. Zero visibility into what's being shared. The real danger isn't the tool, it's sensitive data casually ending up in prompts. Without visibility, leaders can't assess exposure or risk.

The organizations managing this well are doing three things:

- Treating AI governance like cloud governance: not reactive policing, but clear boundaries and continuous oversight.
- Creating guardrails, not just policies: making it easier for employees to do the right thing by default.
- Building transparency into AI usage: not surveillance, just enough visibility to know whether confidential information is at risk.

This is where most companies are heading: moving from "we wrote an AI policy" to "we actually know how AI is being used inside our organization."

If you want, I can also share some insights into how some teams operationalize this day-to-day.
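For what it's worth, here is a minimal sketch (Python, illustrative only) of the application-level guardrail idea above: an internal gateway or browser extension screens each prompt for obvious sensitive-data shapes, logs the event for visibility, and redacts before anything reaches the SaaS LLM. The regex patterns, category names, and redact-rather-than-block policy are assumptions to adapt, not a finished product.

```python
# Illustrative sketch only: a pre-submission prompt screen that an internal
# AI gateway or browser extension could run before text reaches a SaaS LLM.
# Patterns and policy below are assumptions; tune them for your organization.
import re
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-guardrail")

# Naive detectors for common sensitive-data shapes (extend with your own).
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_marker": re.compile(r"\b(CONFIDENTIAL|INTERNAL ONLY)\b", re.I),
}

def screen_prompt(prompt: str) -> dict:
    """Return which sensitive-data categories appear in a prompt."""
    findings = {name: pat.findall(prompt) for name, pat in PATTERNS.items()}
    return {name: hits for name, hits in findings.items() if hits}

def redact(prompt: str) -> str:
    """Replace detected spans so the prompt can still be sent safely."""
    for name, pat in PATTERNS.items():
        prompt = pat.sub(f"[REDACTED:{name}]", prompt)
    return prompt

if __name__ == "__main__":
    prompt = "Summarize this CONFIDENTIAL deal memo for jane.doe@acme.com"
    hits = screen_prompt(prompt)
    if hits:
        # Log for visibility (the transparency point), then redact instead of blocking.
        log.warning("Sensitive data detected before LLM call: %s", list(hits))
        prompt = redact(prompt)
    print(prompt)
```

The design choice is the point: redact-and-log by default keeps employees productive while still giving security teams the visibility this thread is asking about, instead of relying on people reading a policy document.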
Employees use sensitive and proprietary data without realizing the implications when they copy and paste into AI tools. They also don't understand model limitations or recognize biases.
Hearing the same from fellow CIOs. Curious to know: do you have any way today to know when sensitive data ends up in prompts, or is that still a blind spot?

The number one issue is creating governance that fully embraces today's complexity, and then getting adherence to that governance. There are hundreds or thousands of AI websites accessible through the internet, and IT is not able to manage access to all of those sites. This exposes a huge risk that IP suddenly becomes available, because users did not consider that their data may be used to train the application.
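On that shadow-AI visibility point: even before trying to block anything, most organizations already have web-proxy logs showing who reaches which GenAI sites. Below is a rough sketch of mining those logs; the log columns and the watched-domain list are assumptions you would replace with your proxy's actual export format and your own allow/deny list.

```python
# Illustrative sketch only: mining existing web-proxy logs to surface which
# GenAI sites employees actually reach. The CSV columns and domain list are
# assumptions; adapt them to your proxy's export and your own watchlist.
import csv
from collections import Counter

# Hypothetical starting list of GenAI endpoints to watch (extend as needed).
WATCHED_DOMAINS = {
    "chat.openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
    "copilot.microsoft.com",
}

def shadow_ai_report(log_path: str) -> Counter:
    """Count proxy-log hits per (user, GenAI domain) to show who uses what."""
    hits = Counter()
    with open(log_path, newline="") as f:
        # Assumed columns: timestamp,user,domain,bytes_out
        for row in csv.DictReader(f):
            if row["domain"].lower() in WATCHED_DOMAINS:
                hits[(row["user"], row["domain"].lower())] += 1
    return hits

if __name__ == "__main__":
    for (user, domain), count in shadow_ai_report("proxy_log.csv").most_common(10):
        print(f"{user:<20} {domain:<28} {count} requests")
```

A report like this won't enforce anything on its own, but it turns "we can't manage hundreds of AI sites" into a concrete list of which ones actually matter for your users.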