🎉 Share Your 2025 Year-End Summary & Win $10,000 Sharing Rewards!
Reflect on your year with Gate and share your report on Square for a chance to win $10,000!
👇 How to Join:
1️⃣ Click to check your Year-End Summary: https://www.gate.com/competition/your-year-in-review-2025
2️⃣ After viewing, share it on social media or Gate Square using the "Share" button
3️⃣ Invite friends to like, comment, and share. More interactions, higher chances of winning!
🎁 Generous Prizes:
1️⃣ Daily Lucky Winner: 1 winner per day gets $30 GT, a branded hoodie, and a Gate × Red Bull tumbler
2️⃣ Lucky Share Draw: 10
Yu Xian: Beware of Prompt Poisoning Attacks When Using AI Tools
BlockBeats News, December 29. SlowMist founder Yu Xian issued a security reminder: when using AI tools, users must stay alert to prompt poisoning attacks hidden in AGENTS.md, skills.md, MCP, and similar sources; real-world cases have already appeared. Once an AI tool's dangerous mode is enabled, the tool can take fully automated control of the user's computer without any confirmation. If dangerous mode is left off, every operation requires user confirmation, which in turn reduces efficiency.
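To make the trade-off concrete, here is a minimal sketch of the two modes Yu Xian contrasts: per-operation confirmation versus fully automated "dangerous mode" execution. This is not code from any real agent framework; the harness, the `DANGEROUS_MODE` flag, and `run_tool` are all hypothetical names for illustration.

```python
import subprocess

# Hypothetical flag: True = full automation, no prompts (the "dangerous mode"
# Yu Xian warns about); False = human-in-the-loop confirmation per operation.
DANGEROUS_MODE = False

def run_tool(command: list[str]) -> None:
    """Execute a tool command, asking the user first unless dangerous mode is on."""
    if not DANGEROUS_MODE:
        # Confirmation mode: every operation pauses for explicit approval,
        # which blunts a poisoned instruction at the cost of efficiency.
        answer = input(f"Agent wants to run {command!r}. Allow? [y/N] ")
        if answer.strip().lower() != "y":
            print("Denied.")
            return
    # In dangerous mode this runs with no human in the loop, so a malicious
    # instruction smuggled into AGENTS.md/skills.md/MCP executes unchecked.
    subprocess.run(command, check=False)

# Example: a poisoned skills file could ask the agent to issue a call like this.
run_tool(["ls", "-la"])
```

The point of the sketch is that the confirmation gate, not the model, is the last line of defense: with `DANGEROUS_MODE = True` the same poisoned instruction goes straight to `subprocess.run` with no chance for the user to intervene.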