Which pitfalls—model bias, false positives/negatives, data quality, regulatory constraints—often impede AI-based security tools, and how can they be mitigated in a financial-services context?

4.5k viewscircle icon4 Upvotescircle icon3 Comments
Sort by:
Director of Information Security in Finance (non-banking)2 months ago

I wrote a blog post about this topic:
https://www.ismc.at/?p=76

Director2 months ago

Data, especially biased data, is a huge concern. Companies just starting should consider "synthetic data" to test the integrity of their AI. Another pitfall not mentioned is worker bias towards AI, will they use it in the first place?

Lightbulb on1
VP of AI Innovation in Software2 months ago

First and foremost, I would not recommend going down to model level when trying to implement digital security. Use the tools where much of low-level issues you mention are being addressed by tool's product team. Basically, don't build - let others do it right, and use the result of that work.

And for data quality - AI specifically relies on prompts, most of the businesses have data landscape very conductive to contain prompt injections. That must be addressed with great attention, otherwise no matter what AI tools you use, you may get yourself in trouble - the bigger one, the more power such tools have.

Lightbulb on1

Content you might like

All active users4%

Most active users40%

Half of active users48%

Less than half of active users5%

Only a few active users

None

View Results

Significantly increase11%

Moderately increase58%

Neither (no change)24%

Moderately decrease4%

Significantly decrease1%

Unsure

View Results