If your org’s using any virtual assistants with AI capabilities, are you concerned about indirect prompt injection attacks?

Extremely concerned — it’s a major risk18%

Somewhat concerned — it's a potential risk69%

Mildly concerned — it’s on my radar9%

Not particularly concerned — I doubt we’ll be impacted2%

341 PARTICIPANTS
3.7k viewscircle icon2 Comments
Sort by:
Chief Data Scientist in IT Servicesa year ago

This has been a risk for as long as IT systems have been around. I feel like were using the word prompt to talk about Generative AI solutions, but there is alot of solutions that based on Conversational AI. But any solutions that has access to your back end and integration is a risk of attack.

Board Member, Advisor, Executive Coach in Software2 years ago

What many dont realize is that AI or more accurately ML models themselves are not protected - whether it be a model used as a virtual assistant or an ML model used in a trading platform in the financial industry or an ML model embedded in an application like a CRM or even your security tools.  So we should be asking a a much broader question on the risks any ML model poses to our organizations and our customers

Content you might like

Yes, if followed correctly.39%

Unsure38%

No, there is still a significant risk.19%

Other (please tell us in the comments)3%

View Results

HashiCorp (Terraform, Vault, Packer, etc.)22%

Cloud infra automation (Ansible, Puppet, Chef, etc.)56%

APM (Datadog, AppD, SignalFX, NewRelic, etc.)10%

Others?10%

View Results