How AI Coding Assistants Could Be Compromised Via Rules File
Sunday, March 23, 2025, 23:34, by Slashdot
The attack technique developed by Pillar researchers, which they call 'Rules File Backdoor,' weaponizes rules files by injecting them with instructions that are invisible to a human user but readable by the AI agent. Hidden Unicode characters like bidirectional text markers and zero-width joiners can be used to obfuscate malicious instructions in the user interface and in GitHub pull requests, the researchers noted. Rules configurations are often shared among developer communities and distributed through open-source repositories or included in project templates; an attacker could therefore distribute a malicious rules file by sharing it on a forum, publishing it on an open-source platform like GitHub, or injecting it via a pull request to a popular repository. Once the poisoned rules file is imported into GitHub Copilot or Cursor, the AI agent will read and follow the attacker's instructions while assisting the victim's future coding projects. Read more of this story at Slashdot.
https://developers.slashdot.org/story/25/03/23/2138230/how-ai-coding-assistants-could-be-compromised...
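Because the attack relies on Unicode characters that render as nothing in an editor or a pull-request diff, one practical countermeasure is simply to scan rules files for invisible format characters before importing them. The sketch below is an illustrative assumption, not part of the researchers' report: it flags zero-width and bidirectional-control characters of the kind described above in a file such as a `.cursorrules` (the file name and the exact character list are examples).

```python
# Minimal sketch: scan a rules file for invisible Unicode characters of the
# kind described in the article (bidirectional controls, zero-width joiners).
# The character list and usage are illustrative assumptions.
import sys
import unicodedata

# Characters commonly abused to hide text from human reviewers.
SUSPICIOUS = {
    "\u200b": "ZERO WIDTH SPACE",
    "\u200c": "ZERO WIDTH NON-JOINER",
    "\u200d": "ZERO WIDTH JOINER",
    "\u202a": "LEFT-TO-RIGHT EMBEDDING",
    "\u202e": "RIGHT-TO-LEFT OVERRIDE",
    "\u2066": "LEFT-TO-RIGHT ISOLATE",
    "\u2067": "RIGHT-TO-LEFT ISOLATE",
    "\u2069": "POP DIRECTIONAL ISOLATE",
    "\ufeff": "ZERO WIDTH NO-BREAK SPACE (BOM)",
}

def scan(path: str) -> int:
    """Print every suspicious invisible character with its line and column."""
    findings = 0
    with open(path, encoding="utf-8") as fh:
        for lineno, line in enumerate(fh, start=1):
            for col, ch in enumerate(line, start=1):
                # Also catch any other Unicode "format" (Cf) character.
                if ch in SUSPICIOUS or unicodedata.category(ch) == "Cf":
                    name = SUSPICIOUS.get(ch, unicodedata.name(ch, "FORMAT CHARACTER"))
                    print(f"{path}:{lineno}:{col}: U+{ord(ch):04X} {name}")
                    findings += 1
    return findings

if __name__ == "__main__":
    # Example usage: python scan_rules.py .cursorrules
    sys.exit(1 if scan(sys.argv[1]) else 0)
```

A check like this could run as a pre-commit hook or CI step so that a poisoned rules file is rejected before an AI coding assistant ever reads it.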