Are AI-Generated Faces More Trustworthy?

英語世界 (English World), 2023 Issue 1
關(guān)鍵詞:克魯斯面孔湯姆

By Emily Willingham; translated by Chen Xianyu

When TikTok videos emerged in 2021 that seemed to show “Tom Cruise” making a coin disappear and enjoying a lollipop, the account name was the only obvious clue that this wasn’t the real deal. The creator of the “deeptomcruise” account on the social media platform was using “deepfake” technology to show a machine-generated version of the famous actor performing magic tricks and having a solo dance-off.

One tell for a deepfake used to be the “uncanny valley” effect, an unsettling feeling triggered by the hollow look in a synthetic person’s eyes. But increasingly convincing images are pulling viewers out of the valley and into the world of deception promulgated by deepfakes.

The startling realism has implications for malevolent uses of the technology: its potential weaponization in disinformation campaigns for political or other gain, the creation of false porn for blackmail, and any number of intricate manipulations for novel forms of abuse and fraud. Developing countermeasures to identify deepfakes has turned into an “arms race” between security sleuths on one side and cybercriminals and cyberwarfare operatives on the other.

A new study published in the Proceedings of the National Academy of Sciences of the United States of America provides a measure of how far the technology has progressed. The results suggest that real humans can easily fall for machine-generated faces, and even interpret them as more trustworthy than the genuine article. “We found that not only are synthetic faces highly realistic, they are deemed more trustworthy than real faces,” says study co-author Hany Farid, a professor at the University of California, Berkeley. The result raises concerns that “these faces could be highly effective when used for nefarious purposes.”

“We have indeed entered the world of dangerous deepfakes,” says Piotr Didyk, an associate professor at the University of Italian Switzerland in Lugano, who was not involved in the paper. The tools used to generate the study’s still images are already generally accessible. And although creating equally sophisticated video is more challenging, tools for it will probably soon be within general reach, Didyk contends.

4《美國國家科學(xué)院院刊》上發(fā)表了一份新的研究報(bào)告,該報(bào)告對(duì)深度偽造技術(shù)的發(fā)展程度進(jìn)行了評(píng)估。研究結(jié)果表明,真人易為機(jī)器生成的面孔所騙,甚至認(rèn)為其比真實(shí)人臉更可信。報(bào)告合著者、加利福尼亞大學(xué)伯克利分校教授哈尼·法里德說:“我們發(fā)現(xiàn),合成人臉不僅非常逼真,而且被認(rèn)為比真實(shí)人臉更可信。”這一結(jié)果引發(fā)了人們的擔(dān)憂——“使用合成人臉行不法之事可能很有效果”。

5“我們確實(shí)已進(jìn)入危險(xiǎn)的深度偽造世界。”未參與上述研究的瑞士意大利語區(qū)大學(xué)(位于盧加諾)副教授彼得·迪迪克如此說道。研究所用生成靜態(tài)圖像的工具已普及。迪迪克認(rèn)為,盡管同樣復(fù)雜的視頻較難制作,但公眾也許很快就能用上相關(guān)工具。

6這項(xiàng)研究使用的合成人臉是在兩個(gè)神經(jīng)網(wǎng)絡(luò)反復(fù)交互往來的過程中生成的。這兩個(gè)網(wǎng)絡(luò)是典型的生成對(duì)抗網(wǎng)絡(luò)。其中一個(gè)名為生成器,生成一系列不斷演變的合成人臉,就像一名學(xué)生逐步完成草圖一樣。另一個(gè)名為鑒別器,對(duì)真實(shí)圖像進(jìn)行學(xué)習(xí)后,通過比對(duì)真實(shí)人臉的數(shù)據(jù),評(píng)定生成器輸出的圖像。

The synthetic faces for this study were developed in back-and-forth interactions between two neural networks, examples of a type known as generative adversarial networks. One of the networks, called a generator, produced an evolving series of synthetic faces, like a student working progressively through rough drafts. The other network, known as a discriminator, trained on real images and then graded the generated output by comparing it with data on actual faces.

The generator began the exercise with random pixels. With feedback from the discriminator, it gradually produced increasingly realistic humanlike faces. Ultimately, the discriminator was unable to distinguish a real face from a fake one.
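
To make that generator/discriminator feedback loop concrete, here is a minimal, illustrative sketch in PyTorch. Everything in it is an assumption chosen for brevity: the tiny fully connected networks, the optimizer settings, and the random stand-in for a batch of real faces. The study’s own face synthesizer was a far larger model; this toy only mirrors the adversarial training the two paragraphs above describe.

```python
# Minimal GAN sketch (illustrative only; not the study's actual model).
import torch
import torch.nn as nn

LATENT_DIM = 100   # size of the random-noise input
IMG_DIM = 64 * 64  # flattened 64x64 grayscale image

# Generator: maps random noise to a synthetic "face".
G = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),  # pixel values in [-1, 1]
)

# Discriminator: scores an image, ~1 = judged real, ~0 = judged fake.
D = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    n = real_batch.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # 1) Discriminator learns to grade real images high and fakes low.
    fake_batch = G(torch.randn(n, LATENT_DIM)).detach()  # no gradient to G here
    d_loss = bce(D(real_batch), ones) + bce(D(fake_batch), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator uses the discriminator's feedback to look "more real".
    g_loss = bce(D(G(torch.randn(n, LATENT_DIM))), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# The generator literally starts from random noise; a random stand-in
# "real" batch keeps this sketch self-contained and runnable.
print(train_step(torch.randn(32, IMG_DIM)))
```

Training alternates these two updates until the discriminator can no longer separate real from generated images, the equilibrium the paragraph above describes.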

The networks trained on an array of real images representing Black, East Asian, South Asian and white faces of both men and women, in contrast with the more common use of white men’s faces in earlier research.

After compiling 400 real faces matched to 400 synthetic versions, the researchers asked 315 people to distinguish real from fake among a selection of 128 of the images. Another group of 219 participants got some training and feedback about how to spot fakes as they tried to distinguish the faces. Finally, a third group of 223 participants each rated a selection of 128 of the images for trustworthiness on a scale of one (very untrustworthy) to seven (very trustworthy).
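
Read literally, that protocol is a matched-pair sampling design: a pooled set of 800 images from which each participant judges a random 128. The short Python sketch below restates it; the file names and seed are hypothetical placeholders, not the study’s data.

```python
# Sketch of the sampling design described above (placeholder names only).
import random

real_faces = [f"real_{i:03d}.png" for i in range(400)]    # hypothetical IDs
synth_faces = [f"synth_{i:03d}.png" for i in range(400)]  # matched 1:1 to the real set

def draw_trial_set(n_images=128, seed=None):
    """One participant's random selection from the pooled 800 images."""
    rng = random.Random(seed)
    pool = [(f, "real") for f in real_faces] + [(f, "synthetic") for f in synth_faces]
    return rng.sample(pool, n_images)

trials = draw_trial_set(seed=0)
print(len(trials), trials[0])  # 128 (filename, ground-truth label) pairs
```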

The first group did no better than a coin toss at telling real faces from fake ones, with an average accuracy of 48.2 percent. The second group failed to show dramatic improvement, reaching only about 59 percent accuracy even with feedback about those participants’ choices. The group rating trustworthiness gave the synthetic faces a slightly higher average rating of 4.82, compared with 4.48 for real people.
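
The reported figures are ordinary proportions and means, which the toy computation below makes explicit. The response lists here are fabricated stand-ins; only the formulas mirror the study’s measures.

```python
# Toy recomputation of the study's summary measures (data here is made up).
from statistics import mean

# Each trial: (participant's guess, ground truth).
trials = [("real", "real"), ("real", "synthetic"), ("synthetic", "synthetic"),
          ("synthetic", "real"), ("real", "real"), ("real", "synthetic")]
accuracy = 100 * mean(guess == truth for guess, truth in trials)
print(f"accuracy: {accuracy:.1f}%")  # group 1 averaged 48.2%, group 2 about 59%

# Trustworthiness on the 1 (very untrustworthy) to 7 (very trustworthy) scale.
synthetic_ratings = [5, 4, 6, 5]  # toy values; the study's mean was 4.82
real_ratings = [4, 5, 4, 5]       # toy values; the study's mean was 4.48
print(mean(synthetic_ratings), mean(real_ratings))
```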

The researchers were not expecting these results. “We initially thought that the synthetic faces would be less trustworthy than the real faces,” says study co-author Sophie Nightingale.

The uncanny valley idea is not completely retired. Study participants did overwhelmingly identify some of the fakes as fake. “We’re not saying that every single image generated is indistinguishable from a real face, but a significant number of them are,” Nightingale says.

11上述結(jié)果出乎研究人員的預(yù)料?!拔覀冏畛跽J(rèn)為合成人臉的可信度要比真實(shí)人臉低?!闭撐暮现咚鞣啤つ瓮⒏駹柸缡钦f。

12恐怖谷效應(yīng)并沒有完全退去。絕大多數(shù)受試者都認(rèn)為其中一些合成人臉是假的。奈廷格爾說:“我們并不是說,生成的每張圖像都難以與真實(shí)人臉區(qū)分開來,但其中相當(dāng)一部分確實(shí)如此。”

13這一發(fā)現(xiàn)增加了人們對(duì)技術(shù)可及性的擔(dān)憂,因?yàn)橛辛嗽摷夹g(shù),幾乎人人都可創(chuàng)建欺騙性的靜態(tài)圖像。奈廷格爾說:“一個(gè)人即使沒有Photoshop 或CGI 的專業(yè)知識(shí),也能創(chuàng)建合成內(nèi)容?!蹦霞永D醽喆髮W(xué)視覺智能與多媒體分析實(shí)驗(yàn)室創(chuàng)始負(fù)責(zé)人瓦埃勒·阿布德-阿爾馬吉德沒有參與上述研究,但他表示:另一個(gè)擔(dān)憂是研究結(jié)果會(huì)給人留下一種印象,即深度偽造將完全無法檢測出來。阿布德-阿爾馬吉德?lián)?,科學(xué)家可能會(huì)放棄開發(fā)針對(duì)深度偽造的對(duì)策,盡管他認(rèn)為保持檢測技術(shù)與深度偽造不斷提高的真實(shí)性同步發(fā)展,“不過是又一個(gè)取證問題”。

The finding adds to concerns about the accessibility of technology that makes it possible for just about anyone to create deceptive still images. “Anyone can create synthetic content without specialized knowledge of Photoshop or CGI,” Nightingale says. Another concern is that such findings will create the impression that deepfakes will become completely undetectable, says Wael Abd-Almageed, founding director of the Visual Intelligence and Multimedia Analytics Laboratory at the University of Southern California, who was not involved in the study. He worries scientists might give up on trying to develop countermeasures to deepfakes, although he views keeping their detection on pace with their increasing realism as “simply yet another forensics problem.”

“The conversation that’s not happening enough in this research community is how to start proactively to improve these detection tools,” says Sam Gregory, director of programs strategy and innovation at WITNESS, a human rights organization that in part focuses on ways to distinguish deepfakes. Making tools for detection is important because people tend to overestimate their ability to spot fakes, he says, and “the public always has to understand when they’re being used maliciously.”

Gregory, who was not involved in the study, points out that its authors directly address these issues. They highlight three possible solutions, including creating durable watermarks for these generated images, “like embedding fingerprints so you can see that it came from a generative process,” he says.
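
To picture the “embedded fingerprint” idea, here is a toy least-significant-bit watermark in NumPy. It is only an illustration of the concept: real provenance watermarks for generated images are engineered to survive compression, resizing, and cropping, whereas this minimal version is trivially destroyed by any of those.

```python
# Toy LSB watermark: hide and recover a bit pattern in an image's pixels.
import numpy as np

def embed_mark(image, bits):
    """Overwrite the least significant bit of the first len(bits) pixels."""
    flat = image.ravel().copy()
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_mark(image, n_bits):
    """Read the hidden bits back out of the marked image."""
    return image.ravel()[:n_bits] & 1

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)     # stand-in "generated" image
fingerprint = rng.integers(0, 2, size=16, dtype=np.uint8)     # 16-bit provenance mark
marked = embed_mark(image, fingerprint)
assert np.array_equal(extract_mark(marked, 16), fingerprint)  # mark survives round-trip
print(fingerprint)
```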

The authors of the study end with a stark conclusion after emphasizing that deceptive uses of deepfakes will continue to pose a threat: “We, therefore, encourage those developing these technologies to consider whether the associated risks are greater than their benefits,” they write. “If so, then we discourage the development of technology simply because it is possible.” ■

16報(bào)告作者強(qiáng)調(diào),利用深度偽造行騙將繼續(xù)構(gòu)成威脅,最后他們得出立場明確的結(jié)論:“因此,我們敦促技術(shù)開發(fā)者考慮相關(guān)風(fēng)險(xiǎn)是否大于收益?!彼麄儗懙溃叭绻?,那我們就不鼓勵(lì)該技術(shù)的發(fā)展,只因其確實(shí)可能弊大于利?!?□
