Are short term bans of the use of GenAI applications and tools (such as ChatGPT) a good idea for end users in most organizations? For context see this article on the State of Maine Government's directive before you vote: https://www.govtech.com/artificial-intelligence/chatgpt-generative-ai-gets-6-month-ban-in-maine-government
Yes - Maine did the right thing. There are too many security risks with the free versions of these tools, and not enough copyright or privacy protections for data. (23%)
No, but... - You must have good security and privacy policies in place for ChatGPT (and other GenAI apps). My organization has such policies and meaningful ways to enforce them for staff. (44%)
No - Bans simply don't work. Even without policies in place, this action hurts innovation and sends the wrong message to staff and to the world about our organization. (24%)
I'm not sure. This action by Maine makes me think. Let me get back to you in a few weeks (or months). (8%)
Simply put: no. No one is stopping younger employees from using AI in some way, whether on work assets or their own personal devices. It is a way of life. I recently had the opportunity to mentor a group of high school students about tech, AI, the future of jobs, and so on. With these young people, it isn't even a question of "do you use AI?" but rather a conversation about responsible and effective use. Organizations need to embrace the future, albeit with employee guidance and training, restrictions on corporate IP/data, and in some cases no use on corporate assets at all. However, as with many things in life, people will find a way anyway. Prepare for the future; it is tomorrow.
Yes - because you need time to get policy and governance published, training organised, data tagged for DLP, security updated, etc. Staff need to know they shouldn't be using these tools without clear guidelines, especially the free ones: nothing entered into them is private, so don't put anything sensitive or confidential in. But we know people will use them anyway, and we know we need to embrace and extend, so it can only be a temporary block.
Yes - in our organization we blocked ChatGPT and other public/open AI models until we had training and monitoring resources configured. We then incorporated responsible AI use into our annual security awareness training and set up DLP policies to monitor usage. We want employees to experiment and innovate, but we also want to ensure our IP and PII are not at risk. We have since created our own generative AI model using Azure OpenAI and encourage employees to take advantage of that, Copilot, Power BI, and PowerApps within our secured network, rather than using externally hosted tools. Finally, we have embedded AI functionality review in our vendor risk management process and are putting together an inventory of AI use cases in our organization so we can risk-rank them and incorporate periodic assessment into our existing processes.
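The "DLP policies to monitor" idea above can be illustrated with a minimal sketch: a check that scans a prompt for sensitive-data patterns before it is allowed to leave the network. The pattern names and regexes here are hypothetical examples, not the commenter's actual configuration; a real deployment would rely on the organization's own classifiers and tagged data.

```python
import re

# Hypothetical sensitive-data patterns; real DLP rules would be far
# richer and driven by the organization's data-tagging program.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in a prompt."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

def is_safe_to_send(text: str) -> bool:
    """Allow a prompt to reach an external GenAI tool only if no pattern matches."""
    return not scan_prompt(text)
```

For example, `is_safe_to_send("Summarize this public press release")` returns `True`, while a prompt containing an email address or an SSN would be flagged by `scan_prompt` and blocked.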
We are all professionals in our fields, and we know how to surround ourselves with deep expertise in the relevant domains to tackle these innovations responsibly and consciously. We are aware of the risks of adoption, but that is precisely why our companies call on us: to evaluate and understand the best way to take these paths.
Banning, like the prohibitions of the past, belongs to eras I would call geological.
Short-term bans are OK to allow some breathing room to get at least basic policy, governance, and systems in place. If a ban is allowed to stretch much beyond six months, it will produce accelerating risks of non-compliance over time. While I'm disappointed by the amount of over-the-top hype about AI and the tendency to engage in magical thinking about what it can solve, the fact remains that it's a very useful, game-changing tool when properly applied. People are going to use AI because they see its utility. The more they see other people using AI to successfully reduce workload, the greater the temptation will be to engage with it regardless of a ban. Risks that have not materialized into direct, consequential problems are not a deterrent to the average user; otherwise things like this, https://www.lawnext.com/2025/05/ai-hallucinations-strike-again-two-more-cases-where-lawyers-face-judicial-wrath-for-fake-citations.html, would not keep happening.