Has anyone started exploring the incorporation of AI tools and training into their developer communities, particularly in the government space where we tend to be more risk-averse? I’m interested in hearing about the strategies you're using to evaluate these tools and how you distinguish between those offering real value and those that might be overhyped. While tools that simplify code reviews, generate test cases, and ensure automated test coverage are top of mind, I recognize there may be other valuable AI applications. I’d love to hear how others are navigating this space, especially in areas where risk management is a key consideration.
Within our company, we set up an AI taskforce first. The taskforce delivered a plan for evaluating the value of AI to our business and defined the conditions under which tools may be used. The core of the process is converting concerns into requirements: each risk that is raised (e.g., data leakage or vendors training models on our prompts) becomes a concrete requirement a tool must satisfy before adoption.
On the technology side, we evaluate multiple providers concurrently, including OpenAI, Microsoft, Google, and AWS. We arranged corporate contracts, set up single sign-on, and established private connectivity to these providers, with the assurance that our prompts are not used to train their models.
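For concreteness, here is a minimal sketch of what the private-connectivity piece can look like in practice, assuming an Azure OpenAI resource reached through a corporate private endpoint rather than the public ChatGPT service. The endpoint URL, deployment name, and environment variable below are placeholders for illustration, not our actual setup:

```python
import os
from openai import AzureOpenAI

# Hypothetical corporate endpoint and deployment name -- substitute your
# organization's own Azure OpenAI resource, typically exposed only inside
# the corporate network via a private endpoint.
client = AzureOpenAI(
    azure_endpoint="https://example-corp-openai.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="gpt-4o-corporate",  # the deployment name your admins created
    messages=[{"role": "user", "content": "Draft a test plan for this module."}],
)
print(response.choices[0].message.content)
```

The point of this pattern is that developers get the same chat-completion interface, but traffic stays on infrastructure covered by the corporate contract, where the provider's enterprise terms exclude prompts from model training.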
ChatGPT licenses are handed out to users across different disciplines, and we run surveys to measure the value they get from the tools. AI is in active use today; we are aware of its shortcomings, and we are investing in it further to improve our business.
I am happy to help here, but I am not sure whether I can provide exactly what the peer is looking for. In our BASF developer community we have an extended exploration program testing GitHub Copilot. We have collected many insights and best practices and are preparing a broad roll-out. However, we are a chemical company, not in the government space. Would that still be interesting? If so, I could connect the peer to my colleague orchestrating this GitHub Copilot program.