文本选自:Scientific American(科学美国人)
作者:Mordechai Rorvig
原文标题:AI Is Getting Powerful. But Can Researchers Make It Principled?
原文发布时间:5 Apr. 2023
AI Is Getting Powerful. But Can Researchers Make It Principled?
Soon after Alan Turing initiated the study of computer science in 1936, he began wondering if humanity could one day build machines with intelligence comparable to that of humans. Artificial intelligence, the modern field concerned with this question, has come a long way since then. But truly intelligent machines that can independently accomplish many different tasks have yet to be invented. And though science fiction has long imagined AI one day taking malevolent forms such as amoral androids or murderous Terminators, today's AI researchers are often more worried about the everyday AI algorithms that already are enmeshed with our lives—and the problems that have already become associated with them.
Even though today's AI is only capable of automating certain specific tasks, it is already raising significant concerns. In the past decade, engineers, scholars, whistleblowers and journalists have repeatedly documented cases in which AI systems, composed of software and algorithms, have caused or contributed to serious harms to humans. Algorithms used in the criminal justice system can unfairly recommend denying parole.
Social media feeds can steer toxic content toward vulnerable teenagers. AI-guided military drones can kill without any moral reasoning. Additionally, an AI algorithm tends to be more like an inscrutable black box than a clockwork mechanism. Researchers often cannot understand how these algorithms, which are based on opaque equations that involve billions of calculations, achieve their outcomes.
人工智能越来越强大,但研究人员能让它遵循原则吗?
精听党背景导读
人工智能在推动网络信息技术发展的同时,模糊了物理现实、数字世界与个人之间的界限,也衍生出诸多复杂的法律、伦理问题。我们所要应对的已经不单单是弱人工智能和强人工智能问题,还有未来的超人工智能问题。
关键词:AI 伦理道德 研究
精听党带着问题听
1. 如何理解第二段的“contribute”?
2. 文中描述到了哪些担忧?
3. “刷微博”用英语可以如何表达?
精听党选段赏析
标题解读
AI Is Getting Powerful. But Can Researchers Make It Principled?
人工智能越来越强大,但研究人员能让它遵循原则吗?
principled
adj. 有原则的;根据规则(或事实)的;
段一
Soon after Alan Turing initiated the study of computer science in 1936, he began wondering if humanity could one day build machines with intelligence comparable to that of humans. Artificial intelligence, the modern field concerned with this question, has come a long way since then. But truly intelligent machines that can independently accomplish many different tasks have yet to be invented. And though science fiction has long imagined AI one day taking malevolent forms such as amoral androids or murderous Terminators, today's AI researchers are often more worried about the everyday AI algorithms that already are enmeshed with our lives—and the problems that have already become associated with them.
soon after 不久之后,稍后;
initiate vt. 开始;发起;
computer science 计算机科学;
humanity n. 人类(总称);1. crime against humanity 违反人道罪;危害人类罪;
intelligence n. 智力,才智;智能;
comparable adj. 可比的,可比较的;1. be comparable to 比得上;犹如;可相提并论;和…相当;
that of …的同类;…的那种东西或事情;
field n. 专业,领域;
concern vt. 与…有关;涉及;1. be concerned with 关注;涉及;与…有关;
come a long way 取得进展;有很大进步;
independently adv. 独立地;自立地;1. independent adj. 自主的;有主见的;
accomplish vt. 完成,实现;
invent vt. 发明,创造;编造,虚构;
malevolent adj. 恶毒的;有恶意的;坏心肠的;
form n. 形式;1. take the form of 表现为…的形式;采取…样的形式;以…形式出现;
amoral adj. 与道德无关的;无从区分是非的;超道德的;
android n. 机器人;
murderous adj. 蓄意谋杀的,凶残的;
Terminator n. 终结者;
algorithm n.(尤指计算机)算法,运算法则;1. encryption algorithm 加密算法;
enmesh vt. 使绊住;使陷入;缠住;卷入;1. be enmeshed in difficulties 陷入困难中;
associate vt. 联系;1. associate with 交往;结交;联合;使联系;与…联系在一起;和…来往;
参考译文
1936年,艾伦·图灵开创了计算机科学这门学科,此后不久,他便开始思考人类是否有一天能制造出智能与人类相当的机器。人工智能作为研究这一问题的现代领域,自那时以来已经取得了长足进展。但真正能够独立完成许多不同任务的智能机器还有待发明。尽管科幻小说一直设想人工智能有一天会以邪恶的形式出现,比如不辨是非的机器人或凶残的终结者,但如今的人工智能研究人员往往更担心那些已经与我们的生活交织在一起的人工智能算法,以及与之相关的问题。
段二
Even though today's AI is only capable of automating certain specific tasks, it is already raising significant concerns. In the past decade, engineers, scholars, whistleblowers and journalists have repeatedly documented cases in which AI systems, composed of software and algorithms, have caused or contributed to serious harms to humans. Algorithms used in the criminal justice system can unfairly recommend denying parole.
be capable of 有…能力的;可…的;
automate vt. 使自动化,自动操作;
scholar n. 学者;有学问的人;
whistleblower n. 告密者,揭发者;检举者;1. blow the whistle 告发;揭发;
repeatedly adv. 反复地;再三地;屡次地;
document vt. 记录,记载(详情);
compose vt. 组成,构成;1. be composed of 由…组成,构成;
contribute vi. 增加;增进;添加(到某物);1. contribute to sth. 有助于;促进;促成某事物;对某事有所贡献;2. unemployment contributes to a high crime rate 失业造成了高犯罪率;
criminal justice system 刑事司法体系; 刑事司法制度;
unfairly adv. 不公平地,不公正地;
deny vt. 拒绝给予;拒绝承认;拒绝接受;
parole n. 假释;有条件的释放;1. on parole 获得假释;
参考译文
尽管如今的人工智能只能自动完成某些特定的任务,但它已经引起了人们的严重担忧。在过去十年中,工程师、学者、检举人以及记者反复记录了由软件和算法组成的人工智能系统对人类造成或促成严重伤害的案例。刑事司法系统中使用的算法可能会不公正地建议拒绝假释。
段三
Social media feeds can steer toxic content toward vulnerable teenagers. AI-guided military drones can kill without any moral reasoning. Additionally, an AI algorithm tends to be more like an inscrutable black box than a clockwork mechanism. Researchers often cannot understand how these algorithms, which are based on opaque equations that involve billions of calculations, achieve their outcomes.
feed n. (社交媒体不断更新的)信息流,推送内容;1. scroll through weibo feeds 刷微博;
steer vt. 引导,指导;
toxic content 有害内容;
vulnerable adj.(身体上或感情上)脆弱的,易受…伤害的;
drone n. 无人驾驶飞机;
moral adj. 道义上的;道德上的;
reasoning n. 推想;推理;理性的观点;论证;1. logical reasoning 逻辑推理;2. reasoning ability 推理能力;
additionally adv. 此外;又,加之;
tend vi. 倾向于,往往会;照顾,护理;走向,趋向;1. tend to 倾向于;常常;归向;趋于;
inscrutable adj. 难以理解的;高深莫测的;神秘的;
clockwork adj. 装有发条的;重复的;可预测的;平稳的,有规律的;
mechanism n. 机械装置,机件;
be based on 基于,以…为根据;在…基础上;
opaque adj. 难懂的;模糊的;隐晦的;不清楚的;
equation n. 方程;方程式;等式;1. chemical equation 化学方程;化学反应;2. accounting equation 会计等式;会计方程式;
calculation n. 计算;
outcome n. 结果,效果;
参考译文
社交媒体的推送会将有害内容推送给易受伤害的青少年。人工智能制导的军用无人机可以在没有任何道德推理的情况下杀人。此外,人工智能算法往往更像是一个难以捉摸的黑匣子,而不是可预测的钟表装置。研究人员常常无法理解,这些建立在难懂方程之上、涉及数十亿次计算的算法究竟是如何得出结果的。
精听党每日单词
principled
/ˈprɪnsəp(ə)ld/ adj. 有原则的;根据规则(或事实)的;
soon after
不久之后,稍后;
initiate
/ɪˈnɪʃieɪt/ vt. 开始;发起;
computer science
计算机科学;
humanity
/hjuːˈmænəti/ n. 人类(总称);
intelligence
/ɪnˈtelɪdʒəns/ n. 智力,才智;智能;
comparable
/ˈkɑːmpərəb(ə)l/ adj. 可比的,可比较的;
that of
…的同类;…的那种东西或事情;
field
/fiːld/ n. 专业,领域;
concern
/kənˈsɜːrn/ vt. 与…有关;涉及;
come a long way
取得进展;有很大进步;
independently
/ˌɪndɪˈpendəntli/ adv. 独立地;自立地;
accomplish
/əˈkɑːmplɪʃ/ vt. 完成,实现;
invent
/ɪnˈvent/ vt. 发明,创造;编造,虚构;
malevolent
/məˈlevələnt/ adj. 恶毒的;有恶意的;坏心肠的;
form
/fɔːrm/ n. 形式;
amoral
/ˌeɪˈmɔːrəl/ adj. 与道德无关的;无从区分是非的;超道德的;
android
/ˈændrɔɪd/ n. 机器人;
murderous
/ˈmɜːrdərəs/ adj. 蓄意谋杀的,凶残的;
Terminator
/ˈtɜːrmɪˌneɪtər/ n. 终结者;
algorithm
/ˈælɡərɪðəm/ n.(尤指计算机)算法,运算法则;
enmesh
/ɪnˈmeʃ/ vt. 使绊住;使陷入;缠住;卷入;
associate
/əˈsoʊsieɪt/ vt. 联系;
be capable of
有…能力的;可…的;
automate
/ˈɔːtəmeɪt/ vt. 使自动化,自动操作;
scholar
/ˈskɑːlər/ n. 学者;有学问的人;
whistleblower
/ˈwɪs(ə)lˌbloʊər/ n. 告密者,揭发者;检举者;
repeatedly
/rɪˈpiːtɪdli/ adv. 反复地;再三地;屡次地;
document
/ˈdɑːkjument/ vt. 记录,记载(详情);
compose
/kəmˈpoʊz/ vt. 组成,构成;
contribute
/kənˈtrɪbjuːt/ vi. 增加;增进;添加(到某物);
criminal justice system
刑事司法体系; 刑事司法制度;
unfairly
/ˌʌnˈferli/ adv. 不公平地,不公正地;
deny
/dɪˈnaɪ/ vt. 拒绝给予;拒绝承认;拒绝接受;
parole
/pəˈroʊl/ n. 假释;有条件的释放;
feed
/fiːd/ n. (社交媒体不断更新的)信息流,推送内容;
steer
/stɪr/ vt. 引导,指导;
toxic content
有害内容;
vulnerable
/ˈvʌlnərəb(ə)l/ adj.(身体上或感情上)脆弱的,易受…伤害的;
drone
/droʊn/ n. 无人驾驶飞机;
moral
/ˈmɔːrəl/ adj. 道义上的;道德上的;
reasoning
/ˈriːzənɪŋ/ n. 推想;推理;理性的观点;论证;
additionally
/əˈdɪʃənəli/ adv. 此外;又,加之;
tend
/tend/ vi. 倾向于,往往会;照顾,护理;走向,趋向;
inscrutable
/ɪnˈskruːtəb(ə)l/ adj. 难以理解的;高深莫测的;神秘的;
clockwork
/ˈklɑːkwɜːrk/ adj. 装有发条的;重复的;可预测的;平稳的,有规律的;
mechanism
/ˈmekənɪzəm/ n. 机械装置,机件;
be based on
基于,以…为根据;在…基础上;
opaque
/oʊˈpeɪk/ adj. 难懂的;模糊的;隐晦的;不清楚的;
equation
/ɪˈkweɪʒ(ə)n/ n. 方程;方程式;等式;
calculation
/ˌkælkjuˈleɪʃ(ə)n/ n. 计算;
outcome
/ˈaʊtkʌm/ n. 结果,效果;
精听党文化拓展
2022年4月25日,中国发展研究基金会《人工智能时代的伦理:关系视角的审视》报告发布会线上举办,中国发展研究基金会副秘书长俞建拖介绍报告主要内容,多名相关专家学者以报告为基础,围绕上述议题展开深入探讨。
报告发布会上,北京大学哲学系教授何怀宏指出人工智能是一种区别于传统人造物的机器,其能力对人类而言还有很多未被发掘的部分,我们应该进一步思考人和人工智能这一特殊关系,思考如何促进人工智能发展、人工智能如何给人类赋能、如何深入具体地将人工智能运用到各个领域、以及人工智能的运用会带来哪些新的问题。
中国社会科学院哲学所科技哲学研究员、中国社会科学院科学技术和社会研究中心主任段伟文探讨了人工智能伦理治理的主体问题。他指出,当前人工智能领域伦理规范的实践主要以科技公司为主体,政府也应积极地参与到该领域的伦理治理中,扮演积极的角色,更加灵活地进行适应性治理。
暨南大学教授、海国图智研究院院长陈定定认为当前社会的主要问题是伦理规范泛滥和冲突,应该建立一个通用的、全面的人工智能伦理规范。
武汉大学计算机学院教授、卓尔智联研究院执行院长蔡恒进指出,未来三五年内人工智能会有重大突破,元宇宙和Web3.0可以看作是人工智能的重要进展。在Web3.0时代,个体、企业、国家将会成为机器节点并融合为超级智能,这有可能会对社会伦理关系产生影响。
北京大学法学院副教授、北京大学法律人工智能研究中心副主任江溯指出,随着智能技术的广泛使用,社会可能会慢慢变成“全景敞视监狱”,个人的自由空间可能会被压缩。我们应该在法律领域探讨相关问题,研判人工智能应用的法律限度并加以约束。
精听党每日美句
The person who avoids reality will face an even less ideal future.
回避现实的人,未来将更不理想。