
Artificial Intelligence Policy: A Primer and Roadmap①

2019-04-08 01:21:22  Ryan Calo
Qiushi Xuekan (求是学刊), 2019, Issue 2

Ryan Calo

Abstract: The current wave of enthusiasm for artificial intelligence is distinctive in at least two respects. First, thanks to enormous growth in computing power and training data, machine learning has made substantive breakthroughs that put large-scale applications of AI within reach. Second, policymakers are finally paying close attention. AI now raises a set of pressing policy challenges, including justice and equity, use of force, safety and certification, privacy and power, and taxation and displacement of labor, along with cross-cutting questions of institutional configuration and expertise, investment and procurement, removing barriers to accountability, and mental models of AI. AI doomsday scenarios reflect a distinctively human fear of anthropomorphic technologies such as AI and will not come to pass in the foreseeable future. On the contrary, devoting excessive attention and resources to the AI apocalypse may distract policymakers from AI's more immediate harms and challenges, and thereby impede research into AI's effects on society today.

Keywords: artificial intelligence; policy challenges; machine learning; AI doomsday scenarios

1 This article was originally published in the UC Davis Law Review, Vol. 51, No. 2 (2017). We thank the author for generously authorizing this translation. The abstract and keywords were compiled and added by the translator.

2 See Cade Metz, In a Huge Breakthrough, Google's AI Beats a Top Player at the Game of Go, Wired, Jan. 27, 2016. The report describes how, after decades of effort, Google's AI finally defeated a top human player at Go, a 2,500-year-old game of strategy and intuition more complex than chess.

3 See Cathy O'Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Crown, 2016, p.27 (comparing such algorithms to weapons of mass destruction and arguing that both produce vicious cycles); Julia Angwin, et al., Machine Bias, Propublica, May 23, 2016 (examining the errors algorithms make when generating risk-assessment scores).

1 See Martin Ford, Rise of the Robots: Technology and the Threat of a Jobless Future, Basic Books, 2015, p.xvi (predicting that machines will evolve from workers' tools into workers themselves).

2 See James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era, Thomas Dunne Books, 2013, p.5 (arguing that humankind will struggle with this problem to the end).

3 See Batya Friedman,Helen Nissenbaum, “Bias in Computer Systems”, in ACM Transactions on Info. Sys.,1996,14,p.330.

4 See Harley Shaiken, A Robot Is After Your Job: New Technology Isn't a Panacea, N.Y. Times, Sept. 3, 1980. For a timeline of predictions that robots would take everyone's jobs, see Louis Anslow, Robots Have Been About to Take All the Jobs for More than 200 Years, Timeline, May 16, 2016.

5 See Selmer Bringsjord,et al., Creativity, the Turing Test, and the (Better) Lovelace Test, in Minds and Machines,2001,11,p.5;Peter Stone,et al.,Artificial Intelligence and Life in 2030: Report of the 2015 Study Panel,2016,p.50.

6 See Peter Stone, et al., Artificial Intelligence and Life in 2030: Report of the 2015 Study Panel, 2016, pp.50-51; Will Knight, Facebook Heads to Canada for the Next Big AI Breakthrough, MIT Tech. Rev., Sept. 15, 2017 (profiling Canada's leading AI figures and technical breakthroughs).

7 See Peter Stone,et al., Artificial Intelligence and Life in 2030: Report of the 2015 Study Panel,2016,p.14; National Science and Technology Council, Preparing for the Future of Artificial Intelligence,2016,p.6.

8 See Louis Anslow, Robots Have Been About to Take All the Jobs for More than 200 Years, Timeline,May 16, 2016.

9 Kennedy did, however, deliver a speech on the need for "efficient and vigorous government leadership" in response to the "problem of automation." See John F. Kennedy, Remarks at the AFL-CIO Convention, June 7, 1960.

10 See Louis Anslow, Robots Have Been About to Take All the Jobs for More than 200 Years, Timeline,May 16, 2016.

11 See Ted Cruz, Sen. Cruz Chairs First Congressional Hearing on Artificial Intelligence, Press Release, Nov. 30, 2016; The Transformative Impact of Robots and Automation: Hearing Before the J. Econ. Comm.,114th Cong.,2016.

1 See National Science and Technology Council, Preparing for the Future of Artificial Intelligence,2016,p.12.

2 See Iina Lietzen, Robots: Legal Affairs Committee Calls for EU-Wide Rules, European Parliament News, Jan.12,2017; Japan Ministry of Econ., Trade and Indus., Robotics Policy Office Is to Be Established in METI, July 1, 2015.

3 See Peter Stone,et al., Artificial Intelligence and Life in 2030: Report of the 2015 Study Panel,2016,p.51.

4 See Peter Stone,et al., Artificial Intelligence and Life in 2030: Report of the 2015 Study Panel,2016,p.51.

5 See Peter Stone,et al., Artificial Intelligence and Life in 2030: Report of the 2015 Study Panel,2016,p.51; National Science and Technology Council, Preparing for the Future of Artificial Intelligence,2016,p.25.

6 See Peter Stone,et al., Artificial Intelligence and Life in 2030: Report of the 2015 Study Panel,2016,p.51.

7 See Peter Stone, et al., Artificial Intelligence and Life in 2030: Report of the 2015 Study Panel, 2016, pp.6-9. Early on, the field distinguished "weak AI" (or "narrow AI") from "strong AI": the former refers to intelligence aimed at a single task, such as playing chess, while the latter refers to intelligence able to solve any problem a human can. Today the notion of strong AI has largely given way to "artificial general intelligence" (AGI), meaning intelligence that can perform tasks in more than one domain without needing to solve every cognitive task.

8 See National Science and Technology Council, Preparing for the Future of Artificial Intelligence,2016,p.8.

1 See Harry Surden, “Machine Learning and Law”, in Wash. L. Rev.,2014,89,p.88.

2 See Peter Stone,et al., Artificial Intelligence and Life in 2030: Report of the 2015 Study Panel,2016,p.51.

3 See Peter Stone,et al., Artificial Intelligence and Life in 2030: Report of the 2015 Study Panel,2016,pp.14-15; National Science and Technology Council, Preparing for the Future of Artificial Intelligence,2016,pp.9-10.

4 A number of private institutes and public laboratories are also attentive to AI, including the Allen Institute for AI and the Stanford Research Institute (SRI).

5 See Jordan Pearson, Uber's AI Hub in Pittsburgh Gutted a University Lab — Now It's in Toronto, Vice Motherboard, May 9, 2017 (reporting concerns that Uber would become a "parasite" feeding on publicly funded institutions and taxpayer-funded research).

6 See Joseph Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation, W. H. Freeman and Company, 1976, pp.271-272 (discussing the sources of funding for AI research).

7 See Vinod Iyengar, Why AI Consolidation Will Create the Worst Monopoly in U.S. History, Techcrunch, Aug. 24, 2016 (analyzing how the major technology companies acquire promising AI startups); Quora, What Companies Are Winning the Race for Artificial Intelligence?, Forbes, Feb. 24, 2017. There are, of course, also efforts to democratize AI, including the well-funded nonprofit OpenAI.

8 See Clay Dillow, Tired of Repetitive Arguing About Climate Change, Scientist Makes a Bot to Argue for Him, Popular Sci.,Nov. 3, 2010.

9 See Cognitive Assistant that Learns and Organizes, SRI Int'l, http://www.ai.sri.com/project/CALO (last visited Oct. 18, 2017).

1 See Ryan Calo, “Robotics and the Lessons of Cyberlaw”, in Calif. L. Rev.,2015,103,p.532.

2 See Matthew Hutson, Our Bots, Ourselves, Atlantic,Mar.3,2017.

3 See “Ethics and Governance of Artificial Intelligence”, Mass. Inst. of Tech. Sch. of Architecture & Planning, https://www.media.mit.edu/groups/ethics-and-governance/overview (last visited Oct. 15, 2017).

4 See IEEE, Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems, 2016, p.2. I participated in this effort as a member of the law committee. See also id. at p.125.

5 See José de Sousa e Brito, “Right, Duty, and Utility: From Bentham to Kant and from Mill to Aristotle”,in Revista Iberoamericana De Estudios Utilitaristas,2010, XVII/2,pp.91-92.

6 In Hart's account, law has a "rule of recognition." See H.L.A. Hart, The Concept of Law, 3rd edn, Oxford University Press, 2012, p.100.

7 See Matthew Hutson, Our Bots, Ourselves, Atlantic,Mar.3,2017.

8 See Brian R. Cheffins, The History of Corporate Governance, in Douglas Michael Wright,et al. eds., The Oxford Handbook of Corporate Governance, Oxford University Press,2013,p.46.

9 See R.A.W. Rhodes, “The New Governance: Governing Without Government”, in Pol. Stud., 1996, 44, p.657; Wendy Brown, Undoing the Demos: Neoliberalism's Stealth Revolution, Zone Books, 2015, pp.122-123 (noting that nearly all scholars and definitions agree that "governance" involves "networked, integrated, cooperative, collaborative, disseminated, and at least partly self-organized" control).

1 ICANN and the IETF were established with funding from the U.S. government, but today both are nonprofit organizations largely independent of state control.

2 See R.A.W. Rhodes, “The New Governance: Governing Without Government”, in Pol. Stud., 1996, 44, p.657; Wendy Brown, Undoing the Demos: Neoliberalism's Stealth Revolution, Zone Books, 2015, pp.122-123.

3 See Rebecca Wexler, “Life, Liberty, and Trade Secrets: Intellectual Property in the Criminal Justice System”, in Stan. L. Rev., 2018, 70, pp.1343-1429 (clarifying, among other things, that companies may not invoke trade secret law to shield their AI or algorithmic systems from scrutiny by criminal defendants).

4 See Kate Crawford,et al.,The AI NOW Report: The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term,2016,pp.6-8.

5 See “Fairness, Accountability, and Transparency in Machine Learning”, FAT/ML, http://www.fatml.org (last visited Oct. 14, 2017). See also Danielle Keats Citron, “Technological Due Process”, in Wash. U. L. Rev., 2008, 85, pp.1249-1313 (discussing the concept of "technological due process").

6 See Adam Rose, Are Face-Detection Cameras Racist?, Time, Jan. 22, 2010. The camera software evidently embodied the discriminatory Western stereotype that Chinese people have "slanted eyes." — Translator's note

7 See Jessica Guynn, Google Photos Labeled Black People “Gorillas”, USA Today,July 1, 2015.

8 See Aylin Caliskan,et al., “Semantics Derived Automatically from Language Corpora Contain Human-Like Biases”, in Science,2017,356,pp.183-184.
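The finding cited in the note above — that word embeddings trained on ordinary text absorb human-like associations — can be illustrated with a toy version of the cosine-similarity association score that such tests aggregate. This is a minimal sketch: the vectors, word choices, and helper names are hypothetical stand-ins for trained embeddings, not the study's code or data.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def association(word_vec, attr_a, attr_b):
    """Mean similarity to attribute set A minus mean similarity to set B.

    A positive value means the word sits closer to A than to B in the
    embedding space -- the per-word quantity that association tests of
    this kind aggregate over many words.
    """
    mean_a = sum(cosine(word_vec, a) for a in attr_a) / len(attr_a)
    mean_b = sum(cosine(word_vec, b) for b in attr_b) / len(attr_b)
    return mean_a - mean_b

# Toy 2-d vectors standing in for trained embeddings (hypothetical values).
male_attrs = [[1.0, 0.1], [0.9, 0.2]]
female_attrs = [[0.1, 1.0], [0.2, 0.9]]
programmer = [0.8, 0.3]  # leans toward the "male" direction in this toy space
nurse = [0.3, 0.8]       # leans toward the "female" direction

print(association(programmer, male_attrs, female_attrs) > 0)  # True in this toy setup
print(association(nurse, male_attrs, female_attrs) < 0)       # True in this toy setup
```

The toy vectors are chosen so the pattern is visible by construction; the cited study measures the analogous quantity in embeddings learned from real corpora.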

1 See Julia Angwin,Terry Parris, Jr., Facebook Lets Advertisers Exclude Users by Race, Propublica,Oct. 28, 2016.

2 See Julia Angwin,Jeff Larson, The Tiger Mom Tax: Asians Are Nearly Twice as Likely to Get a Higher Price from Princeton Review, Propublica,Sept. 1,2015.

3 See Selina Cheng, An Algorithm Rejected an Asian Man's Passport Photo for Having “Closed Eyes”, Quartz, Dec. 7, 2016.

4 See Adam Hadhazy, Biased Bots: Artificial-Intelligence Systems Echo Human Prejudices, Princeton Univ., Apr. 18, 2017 (noting that "o" is a gender-neutral third-person pronoun in Turkish, yet Google's online translation service rendered "o bir doktor" and "o bir hemşire" as "he is a doctor" and "she is a nurse"); see also Aylin Caliskan, et al., “Semantics Derived Automatically from Language Corpora Contain Human-Like Biases”, in Science, 2017, 356, pp.183-186 (also examining gender stereotyping of occupations in computer systems).

5 See Adam Rose, Are Face-Detection Cameras Racist?, Time, Jan. 22, 2010 (examining how camera software performance varies with race).

6 See Jessica Saunders, et al., “Predictions Put into Practice: A Quasi Experimental Evaluation of Chicago's Predictive Policing Pilot”, in J. Experimental Criminology, 2016, 12, pp.350-351.

7 See Kate Crawford,Ryan Calo, “There Is a Blind Spot in AI Research”, in Nature,2016,538,pp.311-312.

8 See Kate Crawford,Ryan Calo, “There Is a Blind Spot in AI Research”, in Nature,2016,538,pp.311-312;Will Knight, The Financial World Wants to Open AIs Black Boxes, MIT Tech. Rev., Apr. 13, 2017.

9 See Solon Barocas, Andrew D. Selbst, “Big Data's Disparate Impact”, in Calif. L. Rev., 2016, 104, pp.730-732 (discussing the strengths and weaknesses of applying antidiscrimination law in the data-mining context).

10 See Danielle Keats Citron, “Technological Due Process”, in Wash. U. L. Rev., 2008, 85, pp.1249-1313 (arguing that AI decision-making endangers constitutional due process guarantees and advocating a new "technological due process").

11 See Kate Crawford, Jason Schultz, “Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms”, in B.C. L. Rev., 2014, 55, p.110; Solon Barocas, Andrew D. Selbst, “Big Data's Disparate Impact”, in Calif. L. Rev., 2016, 104, pp.730-732.

1 See Bryce Goodman, Seth Flaxman, European Union Regulations on Algorithmic Decision-Making and a “Right to Explanation”, ArXiv, Aug. 31, 2016. Note that the EU General Data Protection Regulation expressly gives users the right to demand human intervention by the data controller, but whether it grants a right to an explanation of AI decisions remains disputed. — Translator's note

2 See Jessica Saunders, et al., “Predictions Put into Practice: A Quasi Experimental Evaluation of Chicago's Predictive Policing Pilot”, in J. Experimental Criminology, 2016, 12, pp.350-351 (examining hot spots in preventive policing); Julia Angwin, et al., Machine Bias, Propublica, May 23, 2016 (examining the risks of algorithmic scoring in criminal-liability determinations); Joseph Walker, State Parole Boards Use Software to Decide Which Inmates to Release, Wall St. J., Oct. 11, 2013.

3 See Danielle Keats Citron, “Technological Due Process”, in Wash. U. L. Rev., 2008, 85, pp.1249-1313 (examining the aims of technological due process); Kate Crawford, Jason Schultz, “Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms”, in B.C. L. Rev., 2014, 55, p.110 (on due process and big data); Joshua A. Kroll, et al., “Accountable Algorithms”, in U. Pa. L. Rev., 2017, 165, p.633 (arguing that current decision procedures have not kept pace with technological development).

4 FED. R. CIV. P. 1. I thank my colleague Elizabeth Porter.

5 U.S. CONST. amend. VI (providing that the accused has the right to be informed of the nature and cause of the accusation, to be confronted with the witnesses against him, to have compulsory process for obtaining witnesses in his favor, and to have the assistance of counsel, all as part of a speedy and public trial).

6 See Jason Millar,Ian Kerr, Delegation, Relinquishment, and Responsibility: The Prospect of Expert Robots, in Ryan Calo,et al. eds.,Robot Law,Edward Elgar Publishing, 2016,p.126.

7 See Jason Millar, Ian Kerr, Delegation, Relinquishment, and Responsibility: The Prospect of Expert Robots, in Ryan Calo, et al. eds., Robot Law, Edward Elgar Publishing, 2016, p.126; Michael L. Rich, “Machine Learning, Automated Suspicion Algorithms, and the Fourth Amendment”, in U. Pa. L. Rev., 2016, 164, pp.877-879 (examining how emerging technologies bear on current Fourth Amendment adjudication); Andrea Roth, “Machine Testimony”, in Yale L.J., 2017, 126, p.1972 (on machines as witnesses).

8 The rule of lenity requires courts to construe criminal statutes narrowly, even where legislative intent seems to favor a broader reading. In McBoyle v. United States, 283 U.S. 25, 26-27 (1931), for example, the Court declined to extend a statute on vehicle theft to the theft of an airplane. On the limits of translating law into machine code, see Harry Surden, Mary-Anne Williams, “Technological Opacity, Predictability, and Self-Driving Cars”, in Cardozo L. Rev., 2016, 38, pp.162-163.

9 See James H. Moor, “Are There Decisions Computers Should Never Make?”, in Nature & System, 1979, 1, p.226. The discussion of use of force below reflects this concern as well.

10 See Kate Crawford, et al., The AI NOW Report: The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term, 2016, pp.6-8; Danielle Keats Citron, “Technological Due Process”, in Wash. U. L. Rev., 2008, 85, pp.1249-1313; “Fairness, Accountability, and Transparency in Machine Learning”, FAT/ML, http://www.fatml.org (last visited Oct. 14, 2017).

1 See Jon Kleinberg,et al., Inherent Trade-Offs in the Fair Determination of Risk Scores, Proc. Innovations Theoretical Computer Sci.,2017,p.2.

2 See Jon Kleinberg, et al., Inherent Trade-Offs in the Fair Determination of Risk Scores, Proc. Innovations Theoretical Computer Sci., 2017, p.1.
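The trade-off Kleinberg et al. formalize can be seen in a small worked example: if two groups have different base rates and every person receives a score equal to their group's observed rate, the scores are perfectly calibrated, yet any single decision threshold produces very different error rates across the groups. The numbers below are hypothetical, chosen only to make the tension visible; this is a sketch of the phenomenon, not the paper's proof.

```python
def rates(scores_and_outcomes, threshold=0.5):
    """False-positive and false-negative rates for a score threshold.

    Each element is (score, outcome) with outcome 1 = event occurred.
    """
    fp = sum(1 for s, y in scores_and_outcomes if s >= threshold and y == 0)
    neg = sum(1 for _, y in scores_and_outcomes if y == 0)
    fn = sum(1 for s, y in scores_and_outcomes if s < threshold and y == 1)
    pos = sum(1 for _, y in scores_and_outcomes if y == 1)
    return fp / neg if neg else 0.0, fn / pos if pos else 0.0

# Group A: base rate 0.6; everyone receives the calibrated score 0.6
# (among people scored 0.6, exactly 60% experience the event).
group_a = [(0.6, 1)] * 60 + [(0.6, 0)] * 40
# Group B: base rate 0.3; everyone receives the calibrated score 0.3.
group_b = [(0.3, 1)] * 30 + [(0.3, 0)] * 70

fpr_a, fnr_a = rates(group_a)
fpr_b, fnr_b = rates(group_b)
print(fpr_a, fpr_b)  # 1.0 0.0 -- calibrated scores, sharply unequal error rates
```

Within each group the score equals the observed frequency, so calibration holds, yet every non-reoffender in group A is falsely flagged while no one in group B is: equalizing both calibration and error rates is impossible here except in degenerate cases, which is the paper's point.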

3 Note that many uses of force occur outside military conflict. One might ask whether the use of force by domestic patrol officers, police, or even private security is appropriate. For discussion, see Elizabeth E. Joh, “Policing Police Robots”, in UCLA L. Rev. Discourse, 2016, 64, pp.530-542.

4 See Heather M. Roff, Richard Moyes, Meaningful Human Control, Artificial Intelligence and Autonomous Weapons, Article36, Apr. 11, 2016.

5 See Rebecca Crootof, “A Meaningful Floor for ‘Meaningful Human Control’”, in Temp. Int'l and Comp. L.J., 2016, 30, p.54 (noting the lack of consensus on what "meaningful human control" actually requires).

6 Kenneth Anderson and Matthew Waxman have contributed importantly to the realpolitik of AI-enabled weapons. See Kenneth Anderson, Matthew Waxman, Law and Ethics for Autonomous Weapon Systems: Why a Ban Won't Work and How the Laws of War Can, Hoover Inst., Apr. 9, 2013 (arguing that autonomous weapons are both desirable and inevitable).

7 See Kenneth Anderson, Matthew Waxman, Law and Ethics for Autonomous Weapon Systems: Why a Ban Won't Work and How the Laws of War Can, Hoover Inst., Apr. 9, 2013.

8 See John Naughton, Death by Drone Strike, Dished Out by Algorithm, Guardian, Feb. 21, 2016 (quoting Gen. Michael Hayden, former director of the CIA and NSA: "We kill people based on metadata.").

1 See M.C. Elish, Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction, We Robot 2016 Working Paper, 2016, p.1; Madeleine Clare Elish, Tim Hwang, When Your Self-Driving Car Crashes, You Could Still Be the One Who Gets Sued, Quartz, July 25, 2015 (the same reasoning extends to the drivers of automated vehicles).

2 See Henrik I. Christensen, et al., From Internet to Robotics: A Roadmap for US Robotics, 2016, pp.105-109; Peter Stone, et al., Artificial Intelligence and Life in 2030: Report of the 2015 Study Panel, 2016, p.42.

3 See Self-Driving Vehicle Legislation: Hearing Before the Subcomm. on Digital Commerce and Consumer Prot. of the H. Comm. on Energy and Commerce, 115th Cong., 2017 (opening statement of Chairman Greg Walden).

4 See Guido Calabresi, The Costs of Accidents: A Legal and Economic Analysis, Yale University Press, 1970 (discussing alternative policy approaches to accident law).

5 See Bryant Walker Smith, “How Governments Can Promote Automated Driving”, in N.M. L. Rev., 2017, 47, p.101 (exploring the various paths by which governments can promote automated driving and ready their communities so that automated vehicles can be integrated seamlessly once they prove their value on the road).

6 See Ryan Calo,The Case for a Federal Robotics Commission, Brookings Institution Center for Technology Innovation,2014,pp.9-10.

1 See Bence Kollanyi, et al., Bots and Automation over Twitter during the Second U.S. Presidential Debate, Comprop Data Memo, 2016.

2 See Ryan Calo, “Robotics and the Lessons of Cyberlaw”, in Calif. L. Rev.,2015,103,pp.538-545.

3 For a treatment of this topic, see Andrea Bertolini, et al., “On Robots and Insurance”, in Int'l J. Soc. Robotics, 2016, 8, p.381 (arguing that the insurance industry needs to respond to robotics).

4 See Henrik I. Christensen,et al., From Internet to Robotics: A Roadmap for US Robotics,2016,p.105.

5 See Mark Harris, Will You Need a New License to Operate a Self-Driving Car?, IEEE Spectrum, Mar. 2, 2015 (examining the current uncertainty around licensing schemes for the "passengers" of automated vehicles).

6 See Megan Molteni, Wellness Apps Evade the FDA, Only to Land in Court, Wired,Apr. 3, 2017.

7 See Arezou Rezvani, “Robot Lawyer” Makes the Case Against Parking Tickets, NPR,Jan.16, 2017.

8 See Greg Allen, Taniel Chan, Artificial Intelligence and National Security, Belfer Center for Science and International Affairs, 2017 (examining approaches to policymaking on AI and national security).

9 See the discussion of use of force above.

10 See Ryan Calo, “Open Robotics”, in Md. L. Rev., 2011, 70, pp.593-601 (examining how robots are capable of causing physical damage and loss).

11 See Cyber Grand Challenge, DEF CON 24, https://www.defcon.org/html/defcon-24/dc-24-cgc.html (last visited Sept. 18, 2017); see also “Mayhem” Declared Preliminary Winner of Historic Cyber Grand Challenge, Def. Advanced Res. Projects Agency, Aug. 4, 2016.

1 For example, the leading privacy law workshop — the Privacy Law Scholars Conference — recently marked its tenth anniversary. Discussions of privacy, of course, stretch back much further.

2 See Neil M. Richards, “The Dangers of Surveillance”, in Harv. L. Rev., 2013, 126, pp.1952-1958 (offering examples of how surveillance institutions watch, blackmail, persuade, and sort people into categories).

3 See Margot E. Kaminski, et al., “Security and Privacy in the Digital Age: Averting Robot Eyes”, in Md. L. Rev., 2017, 76, pp.983-1024 (explaining the sensory capabilities of robots equipped with restricted AI).

4 See Kashmir Hill, How Target Figured Out A Teen Girl Was Pregnant Before Her Father Did, Forbes, Feb. 16, 2012. Tal Z. Zarsky has studied this phenomenon in depth. See Tal Zarsky, “Transparent Predictions”, in U. Ill. L. Rev., 2013, 4, pp.1503-1569 (describing the kinds of trends and behaviors governments seek to predict from the data they collect).

5 See Daniel J. Solove, “Privacy Self-Management and the Consent Dilemma”,in Harv. L. Rev.,2013,126,pp.1889-1893.

6 See Daniel J. Solove, “Privacy and Power: Computer Databases and Metaphors for Information Privacy”,in Stan. L. Rev.,2000,53,pp.1424-1428; Tal Z. Zarsky, “Incompatible: The GDPR in the Age of Big Data”,in Seton Hall L. Rev.,2017,47,pp.1003-1009.

7 For example, Decide.com was an AI tool that helped consumers decide when to buy products and services; it was eventually acquired by eBay. See John Cook, eBay Acquires Decide.com, Shopping Research Site Will Shut Down Sept. 30, Geekwire, Sept. 6, 2013.

8 See Ryan Calo, “Can Americans Resist Surveillance?”, in U. Chi. L. Rev., 2016, 83, pp.23-43 (analyzing the avenues available to American citizens for reforming government surveillance and the attendant challenges).

9 See Joel Reidenberg, “Privacy in Public”, in U. Miami L. Rev., 2014, 69, pp.143-147.

1 Courts and statutes alike tend to give stronger protection to the contents of communications such as email than to non-content information — where a message was sent, whether it was encrypted, whether it contains attachments, and so on. Cf. Riley v. California, 134 S. Ct. 2473 (2014) (holding the warrantless search and seizure of a cell phone incident to arrest invalid).

2 See Florida v. Jardines, 569 U.S. 1, 8-9 (2013).

3 See Orin S. Kerr, “Searches and Seizures in a Digital World”, in Harv. L. Rev., 2005, 119, p.551 (courts have held that no search occurs until information appears on a screen for a person to see; mere computer processing, or copying to a hard drive, does not qualify).

4 See Christina M. Mulligan, “Perfect Enforcement of Law: When to Limit and When to Use Technology”, in Rich. J.L. and Tech.,2008,14,pp.78-102.

5 See Ryan Calo, “Digital Market Manipulation”, in Geo. Wash. L. Rev.,2014,82,pp.1001-1002.

6 See Ian R. Kerr, “Bots, Babes, and the Californication of Commerce”, in U. Ottawa L. and Tech. J., 2004, 1, pp.312-317 (presciently describing the role chatbots would come to play in online commerce).

7 See Ira S. Rubenstein, “Voter Privacy in the Age of Big Data”,in Wis. L. Rev.,2014,5,pp.866-867.

8 See Amanda Levendowski, “How Copyright Law Can Fix Artificial Intelligence's Implicit Bias Problem”, in Wash. L. Rev., 2018, 93, pp.610-618.

9 See Amanda Levendowski, “How Copyright Law Can Fix Artificial Intelligence's Implicit Bias Problem”, in Wash. L. Rev., 2018, 93, pp.610-618.

10 See Amanda Levendowski, “How Copyright Law Can Fix Artificial Intelligence's Implicit Bias Problem”, in Wash. L. Rev., 2018, 93, pp.606-609 (attributing this in part to the fact that large companies have access to more data).

1 See Part I of this Article.

2 See Jan Whittington,et al., Push, Pull, and Spill: A Transdisciplinary Case Study in Municipal Open Government,in Berkeley Tech. L.J.,2015,30,p.1904.

3 See Julia Powles, Hal Hodson, Google DeepMind and Healthcare in An Age of Algorithms, Health Tech., Mar. 16, 2017 (describing Google DeepMind's access to sensitive patient data and the UK government's efforts to limit that access).

4 See Sorrell v. IMS Health Inc., 564 U.S. 552, 579-80 (2011).

5 See James Vincent, Google Is Testing a New Way of Training its AI Algorithms Directly on Your Phone, Verge,Apr.10,2017; Cynthia Dwork, Differential Privacy, in Michele Bugliesi,et al. eds., Automata Languages and Programming,Springer,2006,pp.2-3.
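The privacy-preserving techniques cited in the note above can be sketched with the classic Laplace mechanism from Dwork's paper: release a statistic plus noise scaled to the query's sensitivity divided by the privacy budget epsilon. The helper below is an illustrative implementation under that standard recipe, not a library API, and the patient-count scenario is hypothetical.

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=random):
    """Return true_value plus Laplace(0, sensitivity / epsilon) noise.

    For a counting query (sensitivity 1), this satisfies
    epsilon-differential privacy in Dwork's sense: adding or removing
    any one person's record shifts the output distribution by at most
    a factor of exp(epsilon).
    """
    scale = sensitivity / epsilon
    u = rng.random() - 0.5  # uniform on [-0.5, 0.5)
    # Inverse-CDF sampling of the Laplace distribution.
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_value + noise

# Example: release how many of 500 hypothetical patients have a condition,
# without letting any single record move the answer detectably.
rng = random.Random(0)
true_count = 137
noisy_count = laplace_mechanism(true_count, sensitivity=1, epsilon=0.5, rng=rng)
```

A smaller epsilon means a larger noise scale and stronger privacy at the cost of accuracy; the federated-learning approach in the Vincent piece attacks the same problem from a different angle, by keeping raw data on the device.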

6 See Martin Ford, Rise of the Robots: Technology and the Threat of a Jobless Future, Basic Books, 2015, p.xvi ("the machines themselves are turning into workers…").

7 See Erik Brynjolfsson, Andrew McAfee,The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies,W. W. Norton and Company,2014,pp.126-128.

8 See Exec. Office of the President, Artificial Intelligence, Automation, and the Economy,2016,pp.35-42.

1 See Erik Brynjolfsson, Andrew McAfee,The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies,W. W. Norton and Company,2014,pp.134-138.

2 See Queena Kim, As Our Jobs Are Automated, Some Say We'll Need a Guaranteed Basic Income, NPR Weekend Edition, Sept. 24, 2016.

3 I am thinking in particular of the work of Robert Seamans at NYU's Stern School of Business. See Robert Seamans, We Won't Even Know If a Robot Takes Your Job, Forbes, Jan. 11, 2017.

4 See “Treasury Responds to Suggestion that Robots Pay Income Tax”, in Tax Notes, 1984, 25, p.20 (inanimate objects are not required to file income tax returns).

5 See Kevin J. Delaney, The Robot that Takes Your Job Should Pay Taxes, Says Bill Gates, Quartz,F(xiàn)eb.17, 2017.

6 See Steve Cousins, Is a “Robot Tax” Really an “Innovation Penalty”?, Techcrunch,Apr. 22, 2017.

7 See Ronald Collins,David Skover, Robotica:Speech Rights and Artificial Intelligence,Cambridge University Press,2018;Annemarie Bridy, “Coding Creativity: Copyright and the Artificially Intelligent Author”, in Stan. Tech. L. Rev.,2012,5,pp.21-27; James Grimmelmann, “Copyright for Literate Robots”,in Iowa L. Rev., 2016, 101,p.670.

8 See Part III of this Article.

9 New State Ice Co. v. Liebmann, 285 U.S. 262, 311 (1932) (Brandeis, J., dissenting) (articulating the classic conception of the states as laboratories of democracy).

1 See Andrew Tutt, “An FDA for Algorithms”,in Admin. L. Rev.,2017,69,pp.91-106.

2 See Orin S. Kerr, “The Next Generation Communications Privacy Act”,in U. Pa. L.Rev.,2014,162,pp.375-390.

3 See Woodrow Hartzog, “Unfair and Deceptive Robots”,in Md. L. Rev.,2015,74,pp.812-814.

4 See Ryan Calo,The Case for a Federal Robotics Commission, Brookings Institution Center for Technology Innovation,2014,p.4.

5 See Ryan Calo, The Case for a Federal Robotics Commission, Brookings Institution Center for Technology Innovation, 2014, pp.6-10 (cataloguing the difficulties state and federal governments face when responding to new technologies without adequate expertise).

6 See Ryan Calo, The Case for a Federal Robotics Commission, Brookings Institution Center for Technology Innovation, 2014, p.3; Tom Krazit, Updated: Washington's Sen. Cantwell Prepping Bill Calling for AI Committee, Geekwire, July 10, 2017.

7 See Networking and Information Technology Research and Development Subcommittee of National Science and Technology Council,The National Artificial Intelligence Research and Development Strategic Plan,2016,pp.15-22.

8 See Bryant Walker Smith, “How Governments Can Promote Automated Driving”, in N.M. L. Rev., 2017, 47, pp.118-119 (on the procurement of automated vehicles); Jan Whittington, et al., “Push, Pull, and Spill: A Transdisciplinary Case Study in Municipal Open Government”, in Berkeley Tech. L.J., 2015, 30, pp.1908-1909 (on the procurement of open municipal data).

1 See Loomis v. State, 881 N.W.2d 749, 759 (Wis. 2016) (although the defendant may not challenge the algorithm itself, he or she may still review and challenge the resulting score).

2 See Rebecca Wexler, When a Computer Program Keeps You in Jail, N.Y. Times,June 13, 2017.

3 See Kate Crawford,et al.,The AI NOW Report: The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term,2016;Peter Stone,et al., Stanford Univ., Artificial Intelligence and Life in 2030: Report of the 2015 Study Panel,2016.

4 See Kate Crawford,et al.,The AI NOW Report: The Social and Economic Implications of Artificial Intelligence Technologies in the Near-Term,2016;Peter Stone,et al., Stanford Univ., Artificial Intelligence and Life in 2030: Report of the 2015 Study Panel,2016.

5 See Part III of this Article.

6 Some examples trace back to the origin of the word "robot" itself. See Danny Lewis, 78 Years Ago Today, BBC Aired the First Science Fiction Television Program, Smithsonian, Feb. 11, 2016. Other examples include the landmark German silent film Metropolis (UFA 1927) and the contemporary American film Ex Machina (Universal 2014). Not every portrayal casts robots as villains, however: in Astro Boy, the animated series on which generations of Japanese adults were raised, the robot Astro Boy is a hero. See Astro Boy [Mighty Atom] (Manga), Tezuka in English, http://tezukainenglish.com/wp/?page_id=138 (last visited Oct. 18, 2017).

7 See Kate Darling, “Who's Johnny?”: Anthropomorphic Framing in Human-Robot Interaction, Integration, and Policy, in Patrick Lin, et al. eds., Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence, Oxford University Press, 2017, pp.173-188 (examining the effects of anthropomorphizing robots).

8 See Ryan Calo, “Digital Market Manipulation”, in Geo. Wash. L. Rev., 2014, 82, pp.1001-1002; Ian R. Kerr, “Bots, Babes, and the Californication of Commerce”, in U. Ottawa L. and Tech. J., 2004, 1, pp.312-317; Christina M. Mulligan, “Perfect Enforcement of Law: When to Limit and When to Use Technology”, in Rich. J.L. and Tech., 2008, 14, p.101.

1 See Ryan Calo, “People Can Be So Fake: A New Dimension to Privacy and Technology Scholarship”,in Pa. St. L. Rev.,2009,114,pp.843-846.

2 See Noel Sharkey,et al., Our Sexual Future with Robots: A Foundation for Responsible Robotics Consultation Report,2017,p.1.

3 See Kate Crawford, Ryan Calo, “There Is a Blind Spot in AI Research”, in Nature, 2016, 538, pp.311-312 ("concern about the future impacts of AI is distracting researchers from the real risks of deployed systems…").

4 See Sonali Kohli, Bill Gates Joins Elon Musk and Stephen Hawking in Saying Artificial Intelligence Is Scary, Quartz, Jan. 29, 2015 (discussing how many industry titans believe AI will pose a threat to humanity).

5 See generally Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, Oxford University Press, 2014 (examining "the most daunting challenge humanity has ever faced" and how we might best respond).

6 See Raffi Khatchadourian, The Doomsday Invention, New Yorker, Nov. 23, 2015. Elsewhere, Bostrom has argued that we are quite possibly all living in a computer simulation created by our descendants. See Nick Bostrom, “Are You Living in A Simulation?”, in Phil. Q., 2003, 53, p.211. The two views harbor an interesting tension: if AI wipes us all out in the future, we cannot be living in a simulation created by our descendants; conversely, if we really are living in such a simulation, then AI evidently did not annihilate humanity. Bostrom, I think, may be mistaken on one of these points.

7 See Erik Sofge, Why Artificial Intelligence Will Not Obliterate Humanity, Popular Sci., Mar. 19, 2015. As the Australian computer scientist Mary-Anne Williams once put it to me: "We have been working on AI since the term was coined in the 1950s, and today's robots are about as intelligent as insects."

1 See Connie Loizos, This Famous Roboticist Doesn't Think Elon Musk Understands AI, Techcrunch, July 19, 2017 (quoting Rodney Brooks's observation that AI alarmists "share a common thread: they don't work in AI themselves").

2 See Dave Blanchard, Musk's Warning Sparks Call for Regulating Artificial Intelligence, NPR, July 19, 2017 (citing Yann LeCun's observation that the desire to dominate is not necessarily correlated with intelligence).

3 See Daniel Wilson, Robopocalypse: A Novel, Vintage, 2012. Wilson's book is thrilling in part because Wilson is trained in robotics and deliberately layers in accurate detail that makes the plot feel plausible.

4 See Nick Bostrom, Superintelligence: Paths, Dangers, Strategies, Oxford University Press, 2015, p.123.

5 See Aristotle, Politics, B. Jowett trans., Clarendon Press, 1885, p.17 (describing King Midas's uncontrollable power to turn everything he touched to gold); Fantasia (Walt Disney Co. 1940) (in which a troop of enchanted broomsticks, endlessly hauling water to a cauldron, nearly drowns Mickey Mouse). I owe the King Midas analogy to Stuart Russell, the noted UC Berkeley computer scientist and one of the few AI experts who shares the concern of Musk and others that AI threatens humanity.

6 See Daniel Suarez, Daemon, Signet Books,2009.

7 See Bad Actors and Artificial Intelligence Workshop, The Future of Humanity Inst.,2017.

8 See Alan Moore, Watchmen, Turtleback Books, 1995, pp.382-390 (depicting the chaos that follows after a villainous engineer clones from a human brain a giant monster that destroys New York).

9 See “Past Events”, The Future of Life Inst., https://futureoflife.org/past_events (last visited Oct. 18, 2017). This is evident from the past events hosted by the Future of Life Institute, an organization devoted to safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course in light of new technologies and challenges.

1 Skynet and HAL are the malevolent superintelligences bent on destroying humanity in the science-fiction films The Terminator and 2001: A Space Odyssey, respectively. — Translator's note

2 See Pedro Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World, Brilliance Audio, 2015, p.286.
