Artificial intelligence no longer just answers questions: it browses, researches, plans trips, and can act on your behalf inside other applications. Can you imagine that, while it looks for a hotel or replies to your emails, the system finds malicious instructions hidden on a webpage and acts against your interests?
What is prompt injection and why it matters
prompt injection is a form of social engineering aimed at conversational systems. Instead of tricking a person, the attacker writes hidden instructions inside the content the model processes: a review, a comment, an email or a webpage. The goal is to get the AI to do something you didn't ask for, like recommend a house that doesn't meet your criteria or reveal sensitive information.
It sounds like science fiction, but it's very real. Before, conversations were between you and a single agent; today agents combine information from multiple sources. That mix opens new vectors for malicious third parties to try to manipulate the context.
