Which pitfalls—model bias, false positives/negatives, data quality, regulatory constraints—often impede AI-based security tools, and how can they be mitigated in a financial-services context?

3.4k viewscircle icon4 Upvotescircle icon2 Comments
Sort by:
Director9 days ago

Data, especially biased data, is a huge concern. Companies just starting should consider "synthetic data" to test the integrity of their AI. Another pitfall not mentioned is worker bias towards AI, will they use it in the first place?

Lightbulb on1
VP of Engineering10 days ago

First and foremost, I would not recommend going down to model level when trying to implement digital security. Use the tools where much of low-level issues you mention are being addressed by tool's product team. Basically, don't build - let others do it right, and use the result of that work.

And for data quality - AI specifically relies on prompts, most of the businesses have data landscape very conductive to contain prompt injections. That must be addressed with great attention, otherwise no matter what AI tools you use, you may get yourself in trouble - the bigger one, the more power such tools have.

Lightbulb on1

Content you might like

Finding data and putting it to good use13%

Controlling the security and privacy of data45%

Understanding how data is currently being used20%

All of the above19%

None of the above1%

View Results

Scaling the business32%

Preserving existing deals40%

Business reputation56%

Business continuity37%

Security33%

View Results