X has rolled out a significant update to its Grok AI assistant, restricting how the tool can edit certain types of images. Specifically, the AI can no longer generate images that remove, or appear to remove, clothing from pictures of real individuals.
This move reflects growing industry focus on responsible AI development and content moderation. The restriction appears to address privacy and consent concerns that have become increasingly important as AI image processing capabilities advance.

The update demonstrates how major platforms are balancing innovation with ethical guardrails. As Grok continues evolving as a conversational AI tool, such policy adjustments help establish clearer boundaries around sensitive use cases.

For users relying on Grok for other image analysis tasks, the core functionality remains intact; the change targets only this particular image manipulation capability.
LucidSleepwalker · 6h ago
Alright, it's about time to put an end to this mess. If we keep letting it go, everything will be lost.
JustHereForAirdrops · 6h ago
Grok's move this time is pretty decent, finally showing some conscience. But the real question is, when will other platforms catch up...
OnchainUndercover · 6h ago
Hold on, the clothing removal feature is gone? Now the people looking to exploit it will have to find another approach.
MEV_Whisperer · 6h ago
Oops, another AI that should bow its head and behave properly. We should have been more cautious about this earlier.
CommunityJanitor · 7h ago
NGL, this move is a bit late; it should have been handled earlier.
ShadowStaker · 7h ago
honestly, took them long enough. the real question is whether this is actual guardrails or just theater to dodge regulatory heat. either way, doesn't solve the fundamental issue of how these models get trained in the first place—garbage in, garbage out applies to ethics too tbh