What are your preferred tactics for building effective collaboration on cross-functional teams involved in AI governance and risk management (e.g., joint steering committees, shared KPIs, etc.)? Which roles are currently involved?

Director Global Infrastructure Architecture and Network Ops in Healthcare and Biotech, a month ago

I focus on a mix of structured and collaborative approaches. Regular joint steering meetings with IT, cybersecurity, compliance, legal, data science, and business leaders help align priorities and risk tolerance. Shared KPIs keep everyone accountable, while smaller working groups tackle specific areas like data quality or model validation. Open communication and transparency are key, so issues are surfaced early and teams stay coordinated.

Director Information Security & Trust, a month ago

At Salona, collaboration typically occurs one to two levels below the C-suite. These individuals are responsible for conducting risk assessments and determining the appropriate course of action for the company. C-level executives are brought in when risk acceptance decisions need to be made or when a risk is deemed too high for the business to accept.

Collaboration across teams is essential for effective AI governance and risk management. Our risk team works with every department; our governance and compliance team collaborates with all groups that have control responsibilities, and our application security team partners closely with developers. If any team operates in a silo without collaborating with others, it raises important questions about their role and integration within the organization.

Chief Information Security Officer, a month ago

As CISO, building cross-functional relationships is a primary responsibility. At Hoag Health System, we have a governance group specific to risk, called ITRC. This group is multi-purpose and can pivot to address AI technology, standardized architecture, and other topics as needed. The composition of the group varies depending on the business unit or program, as risk thresholds are not uniform across the organization.

Compliance-driven processes are straightforward, as we follow regulatory requirements and assess whether we are secure or simply compliant. Our cross-functional teams include roles such as the Chief Digital Officer, whom I meet with regularly. We employ a shift-left methodology, involving my team at the project initiation stage to assess risk and review contractual agreements. This early engagement ensures that risk management is embedded from the start.

Director of Engineering, a month ago

We have established a council that brings together executives from various areas, including the CIO, CEO, legal counsel, and other relevant entities. This council evolved from our data governance board, and some members participated in both groups. The framework has proven effective, so we spun it off to ensure we have the right stakeholders involved in AI governance.

Our approach is not about prohibiting innovation, but rather about understanding and supporting it. The council helps clarify the controls we are implementing and the reasons behind them. Much of our early work focused on education, especially in defining technical terms for non-technical participants. Over time, the process has become more aligned with business objectives and goals, moving away from ad hoc requests to a more structured and strategic approach. Collaboration has strengthened, and the need for strategic relationships with C-level executives has increased as AI becomes more integrated into our daily operations and strategy.

Director of IT, a month ago

Building strong collaboration across teams for AI governance and risk management isn’t about creating more bureaucracy – it’s about clarity and trust. Start by setting up a governance council that brings together business, tech, legal and compliance voices. Make sure everyone knows their role with clear ownership and accountability – things like RACI charts work well here.
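
To make the RACI idea concrete, here is a minimal sketch of how a governance council might encode ownership in a machine-readable matrix. The roles, activities, and helper function are hypothetical illustrations, not a prescribed standard:

```python
# Minimal machine-readable RACI sketch for AI governance activities.
# R = Responsible, A = Accountable, C = Consulted, I = Informed.
# Roles and activities below are hypothetical examples, not a standard.
RACI = {
    "model_risk_assessment": {"CISO": "A", "DataScience": "R", "Legal": "C", "BusinessLead": "I"},
    "use_case_approval":     {"CISO": "C", "DataScience": "I", "Legal": "C", "BusinessLead": "A"},
    "bias_testing":          {"CISO": "I", "DataScience": "R", "Legal": "C", "BusinessLead": "A"},
}

def accountable_owner(activity: str) -> str:
    """Return the single role marked Accountable ('A') for an activity."""
    owners = [role for role, code in RACI[activity].items() if code == "A"]
    if len(owners) != 1:
        raise ValueError(f"{activity}: expected exactly one Accountable role, got {owners}")
    return owners[0]

print(accountable_owner("model_risk_assessment"))  # -> CISO
```

Enforcing "exactly one Accountable role per activity" in code is one simple way to catch ownership gaps before they surface as governance disputes.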

Ensure you integrate governance into the AI lifecycle, and don't leave legal and compliance as an afterthought. Apply responsible AI standards and risk assessments from design through to deployment. Dashboards and collaboration tools help keep everyone on the same page and make risks visible.
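
As one way to picture "design through to deployment", here is a minimal sketch of lifecycle stage gates expressed as a checklist; the stage names and gate items are assumptions for illustration, not a mandated process:

```python
# Hypothetical stage gates an AI project must clear before advancing.
LIFECYCLE_GATES = {
    "design": ["use_case_approved", "data_privacy_review"],
    "build":  ["bias_testing_plan", "security_threat_model"],
    "deploy": ["model_validation_signoff", "legal_contract_review"],
}

def can_advance(stage: str, completed: set) -> bool:
    """True only if every gate for the given stage has been completed."""
    missing = [gate for gate in LIFECYCLE_GATES[stage] if gate not in completed]
    if missing:
        print(f"Blocked at {stage}: missing {missing}")
    return not missing

can_advance("design", {"use_case_approved"})
# -> Blocked at design: missing ['data_privacy_review']
```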

Policies should not be static; regulations change fast. Ethical design also matters: bias checks, transparency features, and mandatory training on responsible AI should be standard practice, with tools and metrics to monitor them.
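
To give the "bias checks" point a concrete shape, below is a minimal sketch of one widely used fairness metric, the demographic parity difference. The sample predictions and the 0.2 review threshold are illustrative assumptions, not regulatory values:

```python
# Demographic parity difference: gap in positive-prediction rates between groups.
def demographic_parity_difference(preds, groups):
    """Absolute difference in positive-prediction rates between groups 'A' and 'B'."""
    def rate(g):
        members = [p for p, grp in zip(preds, groups) if grp == g]
        return sum(members) / len(members)
    return abs(rate("A") - rate("B"))

preds  = [1, 0, 1, 1, 0, 1, 0, 0]  # hypothetical binary model outputs
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups)
print(f"Parity gap: {gap:.2f}")    # group A rate 0.75, group B rate 0.25 -> 0.50

THRESHOLD = 0.2                    # illustrative, not a regulatory value
if gap > THRESHOLD:
    print("Flag model for governance review")
```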

Create cross-functional task forces to review priorities, approve use cases and, importantly, share lessons learned. Real-time collaboration tools and AI-powered dashboards give visibility and help track compliance, though critical decisions may still need human oversight.

Roles can be broad, starting with an executive sponsor to set direction and secure budget, and AI leaders to build and steer strategy and ethics. Ensure there are security resources for data protection, risk and compliance resources to keep you aligned, engineers to implement safeguards, and business leads to make sure it all ties back to customer and business outcomes.
