Question for CIOs and heads of IT: What is the biggest challenge you face when you roll out GenAI to employees and why?

CIO · 2 months ago

The number one issue is creating governance that fully embraces today's complexity, and then getting adherence to that governance. There are hundreds or thousands of AI websites accessible through the internet, and IT is not able to manage access to all of them. That in turn creates a huge risk that IP suddenly becomes available outside the company, because users did not consider that their data may be used to train the application's models.

No title · a month ago

Do AI guardrails and policies help at the platform or application level, where we control what is ours (PII, customer information, company information)? Does this align?
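
For context, one way an application-level guardrail can work is to screen prompts before they ever reach the model. Below is a minimal, hypothetical Python sketch under that assumption; the pattern names and the send_to_model callable are placeholders, not any specific vendor's API.

```python
import re

# Hypothetical patterns for data we want to keep out of prompts:
# email addresses, US-style SSNs, and an internal "confidential" label.
BLOCKED_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "confidential_label": re.compile(r"\bconfidential\b", re.IGNORECASE),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any blocked patterns found in the prompt."""
    return [name for name, pattern in BLOCKED_PATTERNS.items() if pattern.search(prompt)]

def guarded_completion(prompt: str, send_to_model):
    """Forward the prompt to the model only if no blocked patterns match."""
    violations = check_prompt(prompt)
    if violations:
        raise ValueError(f"Prompt blocked by guardrail: {', '.join(violations)}")
    return send_to_model(prompt)
```

The point is simply that the check runs in the application layer we control, before any data leaves the organization; a prompt containing, say, an email address would be rejected rather than forwarded.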

VP of IT in Construction · 2 months ago

The biggest risk is governance around responsible usage of AI tools. Employees notoriously ignore standard written policies, and given the SaaS nature of many AI tools, enforcing AI policies is extremely difficult. How do employees use AI tools, and what data do they share? How do you know if confidential data is being used in GenAI prompts?

No title · a month ago

You’re absolutely right: the biggest risk isn’t “AI misuse” in the classic sense, it’s the lack of real governance. Most organizations today are dealing with three gaps:

1. Policies exist, but behavior doesn’t follow. Employees rarely read long AI guidelines, and even when they do, pressure to deliver fast makes them cut corners.

2. SaaS AI tools create a shadow-AI problem. With tools being browser-based and easy to access, IT often has no idea who is using what. That makes enforcement almost impossible.

3. Zero visibility into what’s being shared. The real danger isn’t the tool, it’s sensitive data casually ending up in prompts. Without visibility, leaders can’t assess exposure or risk.

The organizations managing this well are doing three things:

- Treating AI governance like cloud governance: not reactive policing, but clear boundaries and continuous oversight.
- Creating guardrails, not just policies: making it easier for employees to do the right thing by default.
- Building transparency into AI usage: not surveillance, just enough visibility to know whether confidential information is at risk.

This is where most companies are heading: moving from “we wrote an AI policy” to “we actually know how AI is being used inside our organization.”

If you want, I can also share some insights into how some teams operationalize this day to day.
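
To make the “visibility without surveillance” point concrete, here is a minimal, hypothetical Python sketch of usage logging: it records who called which tool and whether the prompt looked like it contained sensitive data, without ever storing the prompt text. The regex and field names are illustrative assumptions, not any particular product's schema.

```python
import datetime
import json
import re

# Hypothetical indicator of sensitive data; a real deployment would use a
# proper DLP/classification service rather than a couple of regexes.
SENSITIVE = re.compile(r"(\b\d{3}-\d{2}-\d{4}\b|[\w.+-]+@[\w-]+\.[\w.-]+)")

def log_ai_usage(user_id: str, tool: str, prompt: str, logfile: str = "ai_usage.jsonl") -> None:
    """Append one record per AI call: who, which tool, and whether the prompt
    looked sensitive. Only a boolean flag is stored, never the prompt itself."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user_id,
        "tool": tool,
        "possible_sensitive_data": bool(SENSITIVE.search(prompt)),
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

Aggregating a log like this is often enough to answer the “is confidential data at risk?” question at the team level without reading anyone's prompts.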

Director, Certifications in Education · 2 months ago

Employees use sensitive and proprietary data without realizing the implications when copying and pasting into AI tools. They also don't understand model limitations or recognize biases.

No title · a month ago

Hearing the same from fellow CIOs. Curious to know: do you have any way today to tell when sensitive data ends up in prompts, or is that still a blind spot?
