Do you think government regulation will stifle the evolution of AI technology?
There's a book called Giving Voice to Values by Mary Gentile (https://www.cfainstitute.org/en/ethics-standards/ethics/giving-voice-to-values), which uses historical case studies to show that people knew right from wrong; they just didn't have the right conversation at the time. Sometimes that was because they didn't want to bring up the uncomfortable thing. It's a fantastic book for getting people to think about how to have uncomfortable but necessary conversations, so that the right discussions can be heard and the right decisions can be made.
It's fascinating to watch how the AI space is evolving; there are a lot of positives. But the government is seriously talking about regulation on the ethical side, which scares the heck out of me. You don't want to tamp down the innovation, but it will take us as IT leaders to put some guardrails around it. It's the ethical side of AI that we as an industry have to watch out for.
Explainable AI is an emerging field that pushes people to rethink how they'll use the data at every step before they actually get the results. It's a concept that started at MIT and that Microsoft and Google have been investing in heavily. The Defense Advanced Research Projects Agency (DARPA) even has an open-source challenge on explainable AI. The goal is to make sure that the people behind any federal system using AI or ML understand exactly what's involved: What is my input? What happens to my intermediate states? What happens to the bias? How is the system tracking with the data points, and what is it putting out?
That's probably the right approach, because if a system is transparent, it's hopefully explainable.
In terms of the ethical side of AI, it's up to us to ensure that it actually happens, or we'll see a repeat of what happened with the payment card industry (PCI). I was at eFunds Corporation (EFD) when the PCI standard came to fruition, because we remember the early breaches: that standard was the industry trying to stay ahead of a massive push for regulation. So if we don't figure it out and self-regulate, Uncle Sam is going to come in and do it for us, and it won't be pretty, it won't be flexible, and it will stifle innovation despite their best intentions.
And don't get me wrong, I'm equally hopeful, and I'm an advocate for AI. But it's up to us, through conversations like this, to ask: How do we ensure that AI is implemented ethically? And how do we hold implementers to that high standard, so that those who don't choose ethical implementations are driven out of business for lack of support?