As autonomous systems become more sophisticated, a critical question emerges: can we trust machines with decisions that determine life and death?
Torsten Reil, co-founder of Helsing, brings a compelling perspective to this ongoing debate. His core argument is straightforward—no matter how advanced the algorithms become, humans must never be removed from the decision-making loop.

This isn't just philosophy. It's about accountability, ethics, and the irreplaceable value of human judgment in scenarios where the stakes couldn't be higher. As warfare and military applications push AI technology into uncharted territory, keeping humans in control remains not just advisable, but essential.

The conversation raises vital questions: Where do we draw the line? How do we balance technological capability with moral responsibility? These aren't problems that code alone can solve.
zkNoob · 21h ago
No matter how powerful the algorithm is, someone has to stand behind it; otherwise, who takes the blame?

SerNgmi · 01-10 06:44
Human decision-making authority cannot be outsourced; that is beyond question. No matter how powerful the algorithm, it has no morality.

GetRichLeek · 01-09 16:05
Well said. I've been warning about AI killing machines for a long time. On-chain data shows the hype around military AI has indeed increased, but would you dare go all in? I definitely wouldn't; it's too much of a black box.

TokenDustCollector · 01-08 19:00
Life-and-death decisions still depend on humans; no matter how powerful the algorithm, it's just a tool. Once you wash your hands of it, who bears the responsibility? That's the core issue.

AirdropSweaterFan · 01-08 18:58
Life-and-death decisions still have to be made by humans; no matter how smart the algorithm, it's not as valuable as human conscience.

ser_we_are_early · 01-08 18:54
Life-and-death decisions should be made by people; otherwise, if something really goes wrong, who will be responsible?

NFTRegretful · 01-08 18:53
No matter how smart the algorithm is, it can't take the blame for humans. I agree with that.

TrustlessMaximalist · 01-08 18:50
No matter how smart the algorithm is, someone has to take the blame; that's the key point.

FOMOrektGuy · 01-08 18:49
Honestly, no matter how smart machines are, they only follow routines. How could life-and-death decisions be left to algorithms? Using artificial intelligence for military applications is genuinely risky.