Do you think government regulation will stifle the evolution of AI technology?

Director of Security Operations in Finance (non-banking) · 4 years ago

In terms of the ethical side of AI, it's up to us to actually ensure that happens, or we'll see a repeat of what happened with the payment card industry (PCI). I was slogging through eFunds Corporation (EFD) at the time PCI came to fruition, because we remember the early breaches; that standard was the industry trying to stay ahead of a massive effort to regulate the environment from outside. So if we don't figure it out and self-regulate, Uncle Sam is going to come in and do it for us, and it won't be pretty, it won't be flexible, and it will stifle innovation despite their best intentions.

And don't get me wrong, I'm equally hopeful and I'm an advocate for AI. But it's up to us, having conversations like this, to ask: how do we ensure that AI is implemented ethically? And how do we hold implementers to that high standard, so that those who don't choose ethical implementations are driven out of business for lack of support?

No title · 4 years ago

There's a book called Giving Voice to Values by Mary Gentile (https://www.cfainstitute.org/en/ethics-standards/ethics/giving-voice-to-values), which uses historical case studies to show that people knew right from wrong; they just didn't have the right conversation at the time. Sometimes that was because they didn't want to bring up the uncomfortable thing. It's a fantastic book for getting people to think about how to have uncomfortable but necessary conversations, so that the right discussions can be heard and the right decisions can be made.

CEO and Co-Founder in Software · 4 years ago

It's fascinating when you look at how the AI space is evolving; there are a lot of positives. But the government is really talking about regulation on the ethical side, which scares the heck out of me. You don't want to tamp down the innovation, but it will take us as IT leaders to put some guardrails around it. It's the ethical side of AI that we as an industry have to watch out for.

Explainable AI is a newly emerging field that's pushing people to rethink how they'll use the data at every step before they actually get the results. It's a new concept started at MIT that Microsoft and Google have been investing in heavily. The Defense Advanced Research Projects Agency (DARPA) even has an open-source challenge on explainable AI. Their goal is to make sure that anyone running a federal system that uses AI or ML understands exactly what's involved: What is my input? What happens to my intermediate results? What happens to the bias? How is the system tracking with the data points, and what is it putting out?
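To make that concrete, here is a minimal sketch of one common explainability technique, permutation feature importance. The thread doesn't name any specific tooling, so Python with scikit-learn and a bundled demo dataset are assumptions here; the point is simply to show how you can ask a trained model which inputs it actually relies on.

```python
# Minimal sketch of one explainability technique: permutation feature importance.
# Assumptions: Python + scikit-learn and a bundled demo dataset (none of this is
# specified in the thread; it only illustrates "what is the system tracking?").
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a public dataset and hold out a test split so importance is measured
# on data the model has not seen during training.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# the features whose shuffling hurts most are the ones the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five most influential inputs.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, mean_drop in ranked[:5]:
    print(f"{name}: accuracy drop {mean_drop:.3f}")
```

Techniques like this don't make a model fully interpretable, but they give reviewers a concrete, auditable answer to "which data points is the system tracking, and what is it putting out?"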

No title · 4 years ago

That's probably the right way because if it's transparent, hopefully it's explainable.

