Pentagon reassures public that its autonomous robotic tank adheres to "legal and ethical standards" for AI-driven killbots

    The Pentagon is seeking bids to improve its Advanced Targeting and Lethality Automated System (ATLAS) so that it can "acquire, identify, and engage targets at least 3X faster than the current manual process."

    When this public tender sparked concern that the Pentagon's autonomous tanks were gaining automated targeting and firing capabilities -- that is, that they would be autonomous killbots -- the Pentagon updated the tender to reassure critics that "development and use of autonomous and semi-autonomous functions in weapon systems, including manned and unmanned platforms, remain subject to the guidelines in the Department of Defense (DoD) Directive 3000.09" and that "All uses of machine learning and artificial intelligence in this program will be evaluated to ensure that they are consistent with DoD legal and ethical standards."

    Why does any of this matter? Department of Defense Directive 3000.09 requires that humans be able to "exercise appropriate levels of human judgment over the use of force," meaning that the U.S. won't toss a fully autonomous robot onto a battlefield and allow it to decide independently whether to kill someone. This safeguard is sometimes described as keeping a human "in the loop": a person, not the machine, makes the final decision about whether to use lethal force.

    U.S. Army Assures Public That Robot Tank System Adheres to AI Murder Policy [Matt Novak/Gizmodo]

    (via JWZ)
