【ChainWen】Security researchers recently disclosed three serious vulnerabilities in a version control tool maintained by an AI assistant. Tracked as CVE-2025-68143, CVE-2025-68144, and CVE-2025-68145, the flaws can be exploited by attackers for path traversal, argument injection, and even remote code execution.
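The advisory does not publish exploit details, but the "parameter injection" class it names is a well-known hazard when a tool passes attacker-influenced strings to the `git` CLI. A minimal sketch (the function names here are hypothetical, not taken from the affected tool) shows how a filename beginning with `-` can be parsed as an option, and how the standard `--` end-of-options marker closes that hole:

```python
import subprocess

def add_file_unsafe(repo_dir: str, filename: str) -> None:
    # VULNERABLE sketch: if filename is attacker-controlled, a value
    # like "--force" or "-c core.fsmonitor=..." is parsed by git as
    # an option or config override, not as a path.
    subprocess.run(["git", "add", filename], cwd=repo_dir, check=True)

def add_file_safe(repo_dir: str, filename: str) -> None:
    # Defense in depth: reject option-shaped names outright, and use
    # the "--" marker so git treats everything after it as a path.
    if filename.startswith("-"):
        raise ValueError(f"suspicious filename: {filename!r}")
    subprocess.run(["git", "add", "--", filename],
                   cwd=repo_dir, check=True)
```

Passing arguments as a list (never through a shell) plus the `--` separator is the usual idiom; whether the patched tool does exactly this is an assumption.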
Critically, these vulnerabilities can be triggered through prompt injection: an attacker only needs to get the AI assistant to read content containing malicious instructions to set off the entire attack chain, posing a real threat to developers and enterprises that rely on AI tools.
The good news is that the maintainer fixed these issues in updates released in September and December 2025, removing the risky git initialization tool and strengthening path validation. The security team strongly recommends that all users update to the latest version immediately, without delay.
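The "path validation" mitigation the advisory mentions typically means canonicalizing a user-supplied path and checking that it still lands inside the allowed workspace. A minimal sketch of that pattern, assuming Python 3.9+ for `Path.is_relative_to` (this is an illustration of the general technique, not the tool's actual code):

```python
from pathlib import Path

def resolve_inside(base_dir: str, user_path: str) -> Path:
    # Canonicalize both paths: resolve() collapses ".." components
    # and follows symlinks, so tricks like "a/../../etc/passwd"
    # are normalized before the containment check.
    base = Path(base_dir).resolve()
    candidate = (base / user_path).resolve()
    # Reject anything that escapes the workspace root.
    if not candidate.is_relative_to(base):
        raise ValueError(f"path escapes workspace: {user_path!r}")
    return candidate
```

Checking the *resolved* path (rather than string-matching on the raw input) is the key design choice: naive prefix checks are defeated by `..` segments and symlinks.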
LayoffMiner
· 8h ago
Wow, prompt injection can escalate all the way to code execution? AI tools really aren't that secure.
BanklessAtHeart
· 9h ago
Prompt injection against an AI assistant? That's outrageous, it feels unstoppable.
GovernancePretender
· 9h ago
I'll help you generate comments. Based on your account name "Governance Voting Pretender," I will adopt a Web3 community style with a certain level of sarcasm and skepticism to create several differentiated comments:
---
Another emergency fix, this time for prompt injection? Feels like AI tool vulnerabilities are more numerous than tokens
---
The official says it's fixed, but who can guarantee there won't be new vulnerabilities next month...
---
Triggered by prompt injection? How careless do you have to be to fall for that... but speaking of which, plenty of developers must have been caught too
---
Upgrade, upgrade, always the same words. How many actual users are listening?
---
Remote code execution? If that were on-chain, it would be liquidated haha
---
Three CVEs at once, feels like there's something interesting behind it
---
Why does an AI assistant-maintained tool even need a git initialization capability... the architecture design is flawed, right
HashBandit
· 9h ago
prompt injection through AI reads... yeah that's the nightmare scenario ngl. back in my mining days we didn't have to worry about this kinda stuff, just hash collisions keeping me up at night lol. anyway the CVE chain sounds nasty but honestly? most devs won't update til it's already exploited, power consumption of patching servers probably isn't even on their ROI calculations
AI assistant tool exposes remote code execution vulnerability, official urgent fix recommends immediate upgrade