Any best practices your team follows when using AI solutions at work? Thinking of security tips, AI-generated content review, training, etc.
Sort by:
IT Manager2 years ago
It all depends whether you are looking at public or private models as each pose similar yet different challenges, primarily due to the level of control available and the perceived risks - which usually differ widely between the users, data security and cyber teams.
Currently I would suggest strong policy statements over what's acceptable, easy and accessible user education on the benefits and pitfalls (always aligned with the policy) and executive risk acceptance that by facilitating access, unforeseen issues may arise.
Focus on prompt injection as a major security consideration. This is quite broad, and with quick proliferation of agentic AI, as well as the massive growth of MCP landscape, impact will grow as prominently.
As far as training, conceptually technology is the same - just leveraged differently by various tools. Knowing fundamentals (e.g., a difference between generative and agentic AI, or these and diffusion, etc.) would help setting the baseline - rest is trainable per whatever serves the purpose.
Last but not least, with a massive hype wave on "AI" itself, people tend to forget about how much their data is ready to be used by LMs - so ai-ready data governance is of massive importance.
Ultimately, think of AI as a transformational force, changing your enabling processes - rather then feeding into them. Such angle of view would allow you to skip the dip of "disappointment" with the tech, and get to use it efficiently, early.