

Real-time facial expression recognition based on convolutional neural network with multi-scale kernel feature

Journal of Computer Applications (计算机应用), No. 9, 2019. Published 2019-10-31.

LI Minze1, LI Xiaoxia1,2*, WANG Xueyuan1,2, SUN Wei2

1.School of Information Engineering, Southwest University of Science and Technology, Mianyang Sichuan 621010, China;

2.Key Laboratory of Special Environmental Robotics in Sichuan Province (Southwest University of Science and Technology), Mianyang Sichuan 621010, China

Keywords: facial expression recognition; convolutional neural network; face detection; kernel correlation filter; transfer learning

CLC number: TP391.4

Document code: A

Abstract:

Facial expression recognition suffers from insufficient generalization ability, poor stability, and difficulty in meeting real-time requirements. To address these problems, a real-time facial expression recognition method based on a convolutional neural network with multi-scale kernel features was proposed. Firstly, an improved MSSD (MobileNet + Single Shot multiBox Detector) lightweight face detection network was proposed, and the detected face coordinates were tracked by a Kernel Correlation Filter (KCF) model to improve detection speed and stability. Then, linear bottleneck layers with three different convolution kernel sizes were used to form three branches, a multi-scale kernel convolution unit was formed by fusing their features through channel concatenation, and the resulting feature diversity was exploited to improve the accuracy of expression recognition. Finally, to improve the generalization ability of the model and prevent over-fitting, different linear transformations were applied for data augmentation to enlarge the dataset, and the model trained on the FER-2013 facial expression dataset was transferred to the small-sample CK+ dataset for retraining. The experimental results show that the recognition rate of the proposed method reaches 73.0% on the FER-2013 dataset, 1.8 percentage points higher than that of the winner of the Kaggle facial expression recognition challenge, and 99.5% on the CK+ dataset. For 640×480 video, the face detection speed of the proposed method reaches 158 frames per second, 6.3 times that of the mainstream face detection network MTCNN (MultiTask Cascaded Convolutional Neural Network), and the overall speed of face detection plus expression recognition reaches 78 frames per second. The proposed method can therefore achieve fast and accurate facial expression recognition.

… the speed of recognition, so depthwise separable convolutions are used to build the network. In the MSSD network, the input first passes through a standard convolutional layer with a 3×3 kernel and a stride of 2, followed by 13 depthwise separable convolutional layers; the output end is connected to 4 standard convolutional layers with alternating 1×1 and 3×3 kernels and one max pooling layer. Since pooling layers lose some useful features, stride-2 convolution kernels are used in place of pooling layers in the network's standard convolutional layers.

Shallow layers of a network have small receptive fields and carry more detail, which is advantageous for detecting small targets, so the MSSD face detection network fuses shallow and deep features. Experimental analysis showed that fusing the shallow features of layer 7 with the deep features works best, so the network fuses the features of layers 7, 15, 16, 17, 18, and 19: the feature maps of these six layers are first reshaped into one-dimensional vectors and then concatenated, enabling multi-scale face detection, as sketched below.
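As an illustration of this fusion step, the following sketch (Python/PyTorch is assumed, and the feature-map shapes are hypothetical, since the paper does not list them) flattens the maps of the six layers into one-dimensional vectors and concatenates them:

import torch

# Hypothetical feature maps from the six fused layers, as (batch, channels, H, W);
# the real shapes depend on the MSSD configuration.
feature_maps = [
    torch.randn(1, 64, 38, 38),   # layer 7: shallow, small receptive field
    torch.randn(1, 128, 10, 10),  # layer 15
    torch.randn(1, 128, 5, 5),    # layer 16
    torch.randn(1, 64, 3, 3),     # layer 17
    torch.randn(1, 64, 2, 2),     # layer 18
    torch.randn(1, 64, 1, 1),     # layer 19
]

# Reshape each map into a 1-D vector per sample, then concatenate them
# so detections can be predicted from features of all six scales.
flattened = [f.reshape(f.size(0), -1) for f in feature_maps]
fused = torch.cat(flattened, dim=1)
print(fused.shape)  # torch.Size([1, total_feature_length])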

2.2 Face detection combined with a tracking model

To further increase detection speed, the face detection network is combined with a tracking model, forming a detect-track-detect pattern. This combination not only effectively increases face detection speed but also handles multi-angle and partially occluded faces. The tracking model is KCF, a tracking algorithm based on statistical learning; it samples candidates with circulant matrices and accelerates the computation with the fast Fourier transform, which greatly improves both its tracking quality and speed. First, the MSSD model detects the face and the KCF tracking model is updated; then the detected face coordinates are fed into KCF as the base sample box, and a strategy of detecting one frame and then tracking ten frames is applied; finally, to prevent tracking loss, the MSSD model is invoked again and the face is re-detected. Figure 3 shows the face detection pipeline with tracking.
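A minimal sketch of this detect-one-frame, track-ten-frames loop, assuming OpenCV's KCF tracker (cv2.TrackerKCF_create from opencv-contrib; in some recent builds it lives under cv2.legacy) and a hypothetical mssd_detect() wrapper around the detection network:

import cv2

def detect_and_track(video_path, mssd_detect, track_frames=10):
    # Alternate MSSD detection with KCF tracking: detect 1 frame, track 10.
    cap = cv2.VideoCapture(video_path)
    tracker, count = None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if tracker is None or count % (track_frames + 1) == 0:
            box = mssd_detect(frame)           # (x, y, w, h) face box from MSSD
            tracker = cv2.TrackerKCF_create()  # re-initialize KCF on a fresh detection
            tracker.init(frame, box)
        else:
            ok, box = tracker.update(frame)    # fast KCF update on tracked frames
            if not ok:                         # tracking lost: fall back to re-detection
                tracker = None
                continue
        count += 1
        yield frame, box
    cap.release()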

3 Multi-scale kernel feature facial expression recognition network

3.1 Depthwise separable convolution

Howard et al. [24] proposed MobileNet in 2017, factorizing the standard convolution into two parts, a depthwise convolution and a pointwise convolution, which together form the depthwise separable convolution. Figures 4(a) and 4(b) compare the standard convolution kernel with the depthwise separable convolution kernel.

Suppose the input feature map has size DF×DF with M channels, the convolution kernel size is DK×DK, and the number of kernels is N.

For the same input and output, the computational cost of standard convolution is DK×DK×M×N×DF×DF, while that of depthwise separable convolution is DK×DK×1×M×DF×DF + 1×1×M×N×DF×DF.

The ratio of the computational cost of depthwise separable convolution to that of standard convolution is therefore:

(DK×DK×1×M×DF×DF + 1×1×M×N×DF×DF) / (DK×DK×M×N×DF×DF) = 1/N + 1/DK²　(1)

For a convolution with a 3×3 kernel, the computation is reduced to roughly 1/9 of the original. This structure greatly reduces the amount of computation and effectively speeds up training and recognition.
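As a worked check of Eq. (1), the following snippet compares the two costs; the feature-map size and channel counts are illustrative, not values from the paper:

def conv_cost(DF, M, DK, N):
    # Multiply-accumulate counts for the same input and output shapes.
    standard = DK * DK * M * N * DF * DF
    separable = DK * DK * 1 * M * DF * DF + 1 * 1 * M * N * DF * DF
    return standard, separable

std, sep = conv_cost(DF=56, M=64, DK=3, N=128)
print(sep / std)             # ~0.119
print(1 / 128 + 1 / 3 ** 2)  # Eq. (1): 1/N + 1/DK^2, also ~0.119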

3.2 Multi-scale kernel convolution unit

The multi-scale kernel convolution unit is built mainly on depthwise separable convolution. Its branches adopt the linear bottleneck structure of MobileNetV2 [25] with one modification: the nonlinear activation function is replaced with PReLU [26]. Figure 5 shows the improved linear bottleneck (bottleneck_p) structure.

The depthwise convolution (Dw_Conv in the figure) serves as the feature extraction part, while the pointwise convolutions (Conv 1×1 in the figure) act as bottleneck layers that scale the number of channels. The pointwise convolution at the output uses a linear structure, because it compresses the channel count and a further nonlinear operation would lose many useful features. Figure 6 shows the structure of the multi-scale kernel convolution unit: it contains three branches, each using the improved linear bottleneck with a stride of 2. The unit, formed by three parallel branches with different depthwise kernel sizes, fuses the diverse features extracted by the different kernel sizes and thereby effectively improves the facial expression recognition rate.
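A minimal PyTorch sketch of the improved linear bottleneck and the three-branch unit follows. The kernel sizes (3, 11, 19), the 16 channels per branch, the stride of 2, and the expansion factor of 6 follow Sections 3.2 and 3.3; the padding choice and the single-channel (grayscale) input are our assumptions:

import torch
import torch.nn as nn

class BottleneckP(nn.Module):
    # Improved linear bottleneck: expand (1x1) -> depthwise -> compress (1x1, linear).
    # PReLU replaces the usual nonlinearity; the compressing 1x1 conv stays linear.
    def __init__(self, in_ch, out_ch, kernel, stride, expand=6):
        super().__init__()
        mid = in_ch * expand
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, mid, 1, bias=False), nn.BatchNorm2d(mid), nn.PReLU(mid),
            nn.Conv2d(mid, mid, kernel, stride, padding=kernel // 2,
                      groups=mid, bias=False),  # depthwise convolution (Dw_Conv)
            nn.BatchNorm2d(mid), nn.PReLU(mid),
            nn.Conv2d(mid, out_ch, 1, bias=False), nn.BatchNorm2d(out_ch),  # linear
        )

    def forward(self, x):
        return self.block(x)

class MultiScaleKernelUnit(nn.Module):
    # Three stride-2 bottleneck branches with 3x3, 11x11 and 19x19 depthwise
    # kernels, fused by channel concatenation: 3 x 16 = 48 output channels.
    def __init__(self, in_ch=1, branch_ch=16):
        super().__init__()
        self.branches = nn.ModuleList(
            BottleneckP(in_ch, branch_ch, k, stride=2) for k in (3, 11, 19))

    def forward(self, x):
        return torch.cat([branch(x) for branch in self.branches], dim=1)

unit = MultiScaleKernelUnit()
print(unit(torch.randn(1, 1, 48, 48)).shape)  # torch.Size([1, 48, 24, 24])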

To justify the effectiveness of multi-scale kernel features and the choice of kernel sizes, 10 groups of comparison experiments were run with the network structure shown in Table 1, which evaluates multi-scale kernel features on FER-2013. Experiment 1 replaces the multi-scale kernel convolution unit with a standard 3×3 convolution; experiments 2-6 use the same kernel size in all three branches of the unit; experiments 7-10 vary the kernel sizes across the three branches. Experiments 1-6 show that a network with a single-scale kernel convolution unit of appropriate kernel size achieves a higher recognition rate than one without; experiments 2-6 show that, among single-scale units, the 3×3 kernel works better than the other sizes; experiments 2-10 show that, except for the case of experiment 9, multi-scale kernel convolution units are more effective than single-scale ones, while experiment 9 indicates that the three kernels of the unit should not all take relatively large sizes.

Based on this analysis, the three optimal kernel sizes 3×3, 11×11, and 19×19 were selected for the multi-scale kernel convolution unit; multi-scale kernel convolution improves the recognition rate by 3.2% over standard convolution.

In the multi-scale kernel convolution unit, all convolutional layers use the PReLU activation function except the pointwise convolution used for compression, which has no nonlinear activation. Equations (2) and (3) give the ReLU [27] and PReLU activation functions, where i indexes the channels.

ReLU(xi) = xi, if xi > 0;　0, if xi ≤ 0　(2)

PReLU(xi) = xi, if xi > 0;　ai·xi, if xi ≤ 0　(3)

The ReLU activation function sets all negative values to zero and leaves the rest unchanged. When a large gradient passes through a ReLU during training, the input can change drastically and most inputs may become negative; the neuron then dies permanently, its gradient stays zero, and the network weights can no longer be updated. PReLU corrects the data distribution so that part of the negative values is preserved, which resolves this problem of ReLU; moreover, the parameter ai in Eq. (3) is learned during training and adapts to the data, giving greater flexibility and adaptability.
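A small numerical illustration of Eqs. (2) and (3) with the learnable slope (a sketch; the initial slope 0.25 is PyTorch's default for nn.PReLU, not a value from the paper):

import torch
import torch.nn as nn

x = torch.tensor([-2.0, -0.5, 0.0, 1.5])
relu, prelu = nn.ReLU(), nn.PReLU(init=0.25)  # a_i is learned during training
print(relu(x))   # tensor([0.0000, 0.0000, 0.0000, 1.5000]): negatives zeroed
print(prelu(x))  # tensor([-0.5000, -0.1250, 0.0000, 1.5000], ...): negatives scaled by a_i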

On this basis, the effect of different activation functions on multi-scale kernel feature expression recognition was compared. Table 2 lists the recognition rates of different activation functions on the FER-2013 dataset: PReLU is 1.8 percentage points higher than ReLU, so PReLU is chosen as the activation function.

3.3 Multi-scale kernel feature network

Table 3 shows the structure of the multi-scale kernel feature network used for facial expression recognition, where multi_conv2d and bottleneck_p(1-5) denote the multi-scale kernel convolution unit and the improved linear bottleneck layer introduced in Section 3.2, respectively. The network input first passes through a multi-scale kernel convolution unit (multi_conv2d) with an expansion factor of 6; each branch convolves with 16 kernels (16 output channels) at a stride of 2, and the three branch features are then fused, giving 48 output channels. Next come 12 improved linear bottleneck layers, each with a 3×3 depthwise kernel, with batch normalization applied during training. Finally, the features pass through a standard convolutional layer with a 1×1 kernel and stride 1 and an average pooling layer with a 3×3 kernel. The output classifier follows the fully convolutional network strategy: a standard convolutional layer with stride 1, 1×1 kernels, and 7 output channels (one per expression class) replaces the fully connected layer, which speeds up expression recognition.
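A sketch of this fully convolutional classifier head follows; the 320 and 1280 channel counts are placeholders (Table 3 is not reproduced here), and a 3×3 feature map is assumed to reach the head so that the pooled output is 1×1:

import torch.nn as nn

# The 1x1 convolution with 7 output channels replaces a fully connected layer,
# speeding up classification into the 7 expression classes.
head = nn.Sequential(
    nn.Conv2d(320, 1280, kernel_size=1, stride=1, bias=False),  # standard 1x1 conv
    nn.BatchNorm2d(1280),
    nn.PReLU(1280),
    nn.AvgPool2d(kernel_size=3),                  # 3x3 average pooling -> 1x1
    nn.Conv2d(1280, 7, kernel_size=1, stride=1),  # 7-way classifier as convolution
    nn.Flatten(),                                 # (batch, 7) logits
)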

4 Experimental results and analysis

The experimental configuration is as follows:

Central Processing Unit (CPU): Intel Core i7-7700K at 4.20GHz with 16GB RAM; Graphics Processing Unit (GPU): GeForce GTX 1080 Ti with 11GB of video memory.

4.1 Datasets

Three datasets are used in the experiments: WIDER FACE [30], CK+ [13], and FER-2013 [14].

The WIDER FACE dataset is a face detection benchmark containing 32203 images with 393703 labeled faces of varying scale, pose, occlusion, expression, illumination, and makeup. The images are grouped into 61 event classes; from each class, 40% of the images are randomly selected for training, 10% for validation, and 50% for testing, i.e., 12881 training, 3220 validation, and 16102 test images.

The CK+ facial expression dataset contains 593 image sequences from 123 subjects. The last frame of each sequence carries action unit labels, and 327 of the sequences carry expression labels in seven classes: anger, contempt, disgust, fear, happiness, sadness, and surprise. Since other expression datasets contain no contempt class, the contempt class was removed for compatibility with them.

FER-2013 is the facial expression dataset provided for the Kaggle facial expression recognition challenge. It contains 35887 expression images in 7 basic classes: anger, disgust, fear, happiness, sadness, surprise, and neutral. The challenge organizers split FER-2013 into three parts: a training set of 28709 images, a public test set of 3589 images, and a private test set of 3589 images. During training, the public test set serves as the validation set and the private test set as the final test set. The dataset covers faces of different ages and angles at relatively low resolution, and many images are occluded by hands, hair, scarves, and the like, making it very challenging and close to real-world conditions.

4.2 Data augmentation

To make the facial expression recognition model robust to noise, angular variation, and other disturbances, data augmentation was applied to the experimental datasets: each image was augmented with different linear transformations, as shown in Figure 7. The transformations are random horizontal flipping, horizontal and vertical shifts with a ratio of 0.1, random scaling with a ratio of 0.1, random rotation within (-10, 10) degrees, and normalization to zero mean and unit variance; blank regions created by the transformations are filled with the nearest pixel values.
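These transformations map directly onto Keras' ImageDataGenerator. The sketch below assumes the per-image (samplewise) variant of zero-mean, unit-variance normalization; the paper does not say whether normalization is per image or over the whole training set:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    horizontal_flip=True,               # random horizontal flipping
    width_shift_range=0.1,              # horizontal shift, ratio 0.1
    height_shift_range=0.1,             # vertical shift, ratio 0.1
    zoom_range=0.1,                     # random scaling, ratio 0.1
    rotation_range=10,                  # random rotation within (-10, 10) degrees
    samplewise_center=True,             # zero mean per image
    samplewise_std_normalization=True,  # unit variance per image
    fill_mode='nearest',                # fill blank regions with the nearest pixels
)
# Usage: datagen.flow(x_train, y_train, batch_size=16)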

4.3 Face detection results

For the MSSD face detection network with tracking, the base network of MSSD, MobileNet, is first pre-trained on the large-scale 1000-class ImageNet [31] image database; the pre-trained model is then transferred into the MSSD network and fine-tuned on the WIDER FACE face detection benchmark; finally, the network is tested on the WIDER FACE test set. Figure 8 shows detection results on some test images: the MSSD face detection network copes well with multiple scales, multiple angles, occlusion, and other conditions, with strong stability.

For detection speed, a 640×480 video was used for testing, the average processing speed was computed over its first 3000 frames, and comparison experiments were run against mainstream face detection network models. Table 4 compares the face detection speed of the different methods. The MSSD network alone detects faces at 63 frames/s; combined with the KCF tracker, the speed reaches 158 frames/s. The proposed method is 6.3 times faster than MTCNN (MultiTask Cascaded Convolutional Neural Network), a mainstream face detection network, a very clear advantage.

4.4 Facial expression recognition results

The facial expression recognition experiments were trained and tested on the FER-2013 and CK+ datasets. Training used randomly initialized weights and biases, a batch size of 16, and an initial learning rate of 0.01, with an automatic stopping strategy: when over-fitting appears, training stops automatically after 20 further epochs and the model is saved.
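A sketch of this training setup, assuming Keras with an SGD optimizer (the paper specifies only the batch size, the initial learning rate, and the 20-epoch stopping patience; model, x_train, and the other names are placeholders):

from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.optimizers import SGD

model.compile(optimizer=SGD(learning_rate=0.01),  # initial learning rate 0.01
              loss='categorical_crossentropy', metrics=['accuracy'])

# Automatic stop: once the validation loss stops improving (over-fitting onset),
# wait 20 more epochs, then stop and keep the best weights seen so far.
stopper = EarlyStopping(monitor='val_loss', patience=20, restore_best_weights=True)
model.fit(x_train, y_train, batch_size=16, epochs=200,  # epoch cap is a placeholder
          validation_data=(x_val, y_val), callbacks=[stopper])
model.save('fer2013_model.h5')  # hypothetical checkpoint path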

The model was trained on the FER-2013 training set (28709 images), with the public test set (3589 images) serving as the validation set for tuning the model's weight parameters, and finally tested on the private test set (3589 images); the results were then compared with state-of-the-art expression recognition networks. The first part of Table 5 compares the recognition rates of different methods on FER-2013. The proposed method outperforms the other mainstream methods, reaching a recognition rate of 73.0%, 1.8 percentage points higher than that of Tang [16], the winner of the Kaggle facial expression recognition challenge, while recognizing at 154 frames/s.

The experiments on the CK+ dataset used transfer learning: the weight parameters trained on FER-2013 served as the pre-training result and were fine-tuned on CK+, and model performance was evaluated with 10-fold cross-validation. The second part of Table 5 compares the recognition rates of different methods on CK+; the proposed method achieves the highest recognition rate, 99.5%.
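A sketch of this transfer step, continuing the previous sketch (the checkpoint name, the reduced fine-tuning learning rate, and fine-tuning all layers are our assumptions; ck_x_*/ck_y_* stand for one CK+ cross-validation fold):

from tensorflow.keras.optimizers import SGD

model.load_weights('fer2013_model.h5')  # weights trained on FER-2013 as pre-training
model.compile(optimizer=SGD(learning_rate=0.001),  # smaller rate for fine-tuning
              loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(ck_x_train, ck_y_train, batch_size=16, epochs=100,
          validation_data=(ck_x_val, ck_y_val))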

Tables 6 and 7 give the confusion matrices of the recognition results on FER-2013 and CK+, respectively. On FER-2013, happiness has the highest recognition rate at 90.0%, followed by surprise and disgust, while the rates for fear and sadness are relatively low. Table 6 shows that the cause is that these two expressions are easily confused with each other. To analyze the two classes more intuitively, Figure 9 shows fear and sadness images from FER-2013: in this dataset the two expressions are extremely easy to confuse, and even a human can hardly judge them accurately. The CK+ dataset is small, has far less label noise than FER-2013, and consists entirely of clear frontal expression photos; on it, the proposed method recognizes every expression class at 100% except disgust, of which only 3% is misclassified as anger, giving an overall recognition rate of 99.5%.

5 Conclusion

To address the insufficient generalization ability, poor stability, and non-real-time speed of facial expression recognition, a real-time and stable facial expression recognition method based on a convolutional neural network with multi-scale kernel features was proposed. Face detection in a detect-plus-track mode achieves fast and stable detection at 158 frames/s, and the multi-scale kernel feature expression recognition network reaches high recognition rates of 73.0% on FER-2013 and 99.5% on CK+. The whole system uses a lightweight network structure with an overall processing speed of 78 frames/s; both accuracy and speed meet practical requirements. In future work, the features of each layer can be visualized with deconvolution and other methods, and effective low-level and high-level features can be combined to further improve accuracy. Expression datasets closer to real environments can also be adopted for training, with additional expression classes such as pain, so that the research can be applied in practical scenarios such as medical monitoring.

References

[1]EKMAN P, FRIESEN W V. Constants across cultures in the face and emotion [J]. Journal of Personality and Social Psychology, 1971, 17(2): 124-129.

[2]ZHAO X, ZHANG S. Facial expression recognition based on local binary patterns and kernel discriminant isomap [J]. Sensors, 2011, 11(10): 9573-9588.

[3]KUMAR P, HAPPY S L, ROUTRAY A. A real-time robust facial expression recognition system using HOG features [C]// CAST 2016: Proceedings of the 2016 International Conference on Computing, Analytics and Security Trends. Piscataway, NJ: IEEE, 2016: 289-293.

[4]LIU S S, TIAN Y T, WAN C. Facial expression recognition method based on Gabor multi-orientation features fusion and block histogram [J]. Acta Automatica Sinica, 2011, 37(12): 1455-1463. (in Chinese)

[5]BERRETTI S, del BIMBO A, PALA P, et al. A set of selected SIFT features for 3D facial expression recognition [C]// ICPR 2010: Proceedings of the 2010 20th International Conference on Pattern Recognition. Piscataway, NJ: IEEE, 2010: 4125-4128.

[6]CHEON Y, KIM D. Natural facial expression recognition using differential-AAM and manifold learning [J]. Pattern Recognition, 2009, 42(7): 1340-1350.

[7]YIN X Y, WANG X, DONG L F, et al. Design of recognition for facial expression by hidden Markov model [J]. Journal of University of Electronic Science and Technology of China, 2003, 32(6): 725-728. (in Chinese)

[8]VAPNIK V N, LERNER A Y. Recognition of patterns with help of generalized portraits [J]. Avtomatika I Telemekhanika, 1963, 24(6): 774-780.

[9]ROWEIS S T, SAUL L K. Nonlinear dimensionality reduction by locally linear embedding [J]. Science, 2000, 290(5500): 2323-2326.

[10]HART P E. The condensed nearest neighbor rule [J]. IEEE Transactions on Information Theory, 1968, 14(3): 515-516.

[11]KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks [C]// NIPS '12: Proceedings of the 25th International Conference on Neural Information Processing Systems. North Miami Beach, FL, USA: Curran Associates, 2012: 1097-1105.

[12]LYONS M J, AKAMATSU S, KAMACHI M, et al. Coding facial expressions with Gabor wavelets [C]// AFGR 1998: Proceedings of the 3rd IEEE International Conference on Automatic Face and Gesture Recognition. Piscataway, NJ: IEEE, 1998: 200-205.

[13]LUCEY P, COHN J F, KANADE T, et al. The extended Cohn-Kanade dataset (CK+): a complete dataset for action unit and emotion-specified expression [C]// CVPRW 2010: Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Washington, DC: IEEE Computer Society, 2010: 94-101.

[14]GOODFELLOW I J, ERHAN D, CARRIER P L, et al. Challenges in representation learning: a report on three machine learning contests [J]. Neural Networks, 2015, 64: 59-63.

[15]DHALL A, GOECKE R, LUCEY S, et al. Static facial expression analysis in tough conditions: data, evaluation protocol and benchmark [C]// ICCVW 2011: Proceedings of the 2011 IEEE International Conference on Computer Vision Workshops. Piscataway, NJ: IEEE, 2011: 2106-2112.

[16]TANG Y. Deep learning using linear support vector machines [EB/OL]. arXiv:1306.0239[2018-12-21]. https://arxiv.org/pdf/1306.0239.pdf.

[17]AL-SHABI M, CHEAH W P, CONNIE T. Facial expression recognition using a hybrid CNN-SIFT aggregator [EB/OL]. arXiv:1608.02833[2018-08-17]. https://arxiv.org/ftp/arxiv/papers/1608/1608.02833.pdf.

[18]FANG H, PARTHALIN N M, AUBREY A J, et al. Facial expression recognition in dynamic sequences: an integrated approach [J]. Pattern Recognition, 2014, 47(3): 1271-1281.

[19]JEON J, PARK J-C, JO Y J, et al. A real-time facial expression recognizer using deep neural network [C]// IMCOM '16: Proceedings of the 10th International Conference on Ubiquitous Information Management and Communication. New York: ACM, 2016: Article No. 94.

[20]NEHAL O, NOHA A, FAYEZ W. Intelligent real-time facial expression recognition from video sequences based on hybrid feature tracking algorithms [J]. International Journal of Advanced Computer Science and Applications, 2017, 8(1): 245-260.

[21]LIU W, ANGUELOV D, ERHAN D, et al. SSD: single shot multibox detector [C]// Proceedings of the 2016 European Conference on Computer Vision, LNCS 9905. Berlin: Springer, 2016: 21-37.

[22]HENRIQUES J F, CASEIRO R, MARTINS P, et al. High-speed tracking with kernelized correlation filters [J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2015, 37(3): 583-596.

[23]SIMONYAN K, ZISSERMAN A. Very deep convolutional networks for large-scale image recognition [EB/OL]. arXiv:1409.1556[2019-01-10]. https://arxiv.org/pdf/1409.1556.pdf.

[24]HOWARD A G, ZHU M, CHEN B, et al. MobileNets: efficient convolutional neural networks for mobile vision applications [EB/OL]. arXiv:1704.04861[2018-12-17]. https://arxiv.org/pdf/1704.04861.pdf.

[25]SANDLER M, HOWARD A, ZHU M, et al. Inverted residuals and linear bottlenecks: mobile networks for classification, detection and segmentation [EB/OL]. arXiv:1801.04381[2018-12-16]. https://arxiv.org/pdf/1801.04381v2.pdf.

[26]HE K, ZHANG X, REN S, et al. Delving deep into rectifiers: surpassing human-level performance on ImageNet classification [EB/OL]. arXiv:1502.01852[2018-12-06]. https://arxiv.org/pdf/1502.01852.pdf.

[27]JARRETT K, KAVUKCUOGLU K, RANZATO M, et al. What is the best multi-stage architecture for object recognition? [C]// ICCV 2009: Proceedings of the IEEE 12th International Conference on Computer Vision. Piscataway, NJ: IEEE, 2009: 2146-2153.

[28]LIEW S S, KHALIL-HANI M, BAKHTERI R. Bounded activation functions for enhanced training stability of deep neural networks on visual pattern recognition problems [J]. Neurocomputing, 2016, 216(C): 718-734.

[29]CLEVERT D-A, UNTERTHINER T, HOCHREITER S. Fast and accurate deep network learning by Exponential Linear Units (ELUs) [EB/OL]. arXiv:1511.07289[2019-01-22]. https://arxiv.org/pdf/1511.07289.pdf.

[30]YANG S, LUO P, LOY C C, et al. WIDER FACE: a face detection benchmark [C]// CVPR 2016: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2016: 5525-5533.

[31]DENG J, DONG W, SOCHER R, et al. ImageNet: a large-scale hierarchical image database [C]// CVPR 2009: Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway, NJ: IEEE, 2009: 248-255.

[32]YANG S, LUO P, LOY C C, et al. From facial parts responses to face detection: a deep learning approach [C]// ICCV 2015: Proceedings of the 2015 IEEE International Conference on Computer Vision. Piscataway, NJ: IEEE, 2015: 3676-3684.

[33]ZHANG K, ZHANG Z, LI Z, et al. Joint face detection and alignment using multitask cascaded convolutional networks [J]. IEEE Signal Processing Letters, 2016, 23(10):1499-1503.

[34]SZEGEDY C, IOFFE S, VANHOUCKE V, et al. Inception-v4, Inception-ResNet and the impact of residual connections on learning [C]// AAAI 2017: Proceedings of the 31st AAAI Conference on Artificial Intelligence. Menlo Park, CA: AAAI Press, 2017: 23-38.

[35]GUO Y, TAO D, YU J, et al. Deep neural networks with relativity learning for facial expression recognition [C]// ICMEW 2016: Proceedings of the 2016 IEEE International Conference on Multimedia and Expo Workshops. Piscataway, NJ: IEEE, 2016: 1-6.

[36]YAN J, ZHENG W, CUI Z, et al. A joint convolutional bidirectional LSTM framework for facial expression recognition [J]. IEICE Transactions on Information and Systems, 2018, 101(4): 1217-1220.

[37]FERNANDEZ P D M, PEÑA F A G, REN T I, et al. FERAtt: facial expression recognition with attention net [EB/OL]. arXiv:1902.03284[2019-02-08]. https://arxiv.org/pdf/1902.03284.pdf.

[38]SONG X, BAO H. Facial expression recognition based on video [C]// AIPR 2016: Proceedings of the 2016 IEEE Applied Imagery Pattern Recognition Workshop. Washington, DC: IEEE Computer Society, 2016: 1-5.

[39]ZHANG K, HUANG Y, DU Y, et al. Facial expression recognition based on deep evolutional spatial-temporal networks [J]. IEEE Transactions on Image Processing, 2017, 26(9): 4193-4203.

This work is partially supported by the National Natural Science Foundation of China (61771411), the Sichuan Science and Technology Project (2019YJ0449), and the Graduate Innovation Fund of Southwest University of Science and Technology (18ycx123).

LI Minze, born in 1992, M. S. candidate. His research interests include deep learning, computer vision.

LI Xiaoxia, born in 1976, Ph. D., professor. Her research interests include pattern recognition, computer vision.

WANG Xueyuan, born in 1974, Ph. D., associate professor. His research interests include image processing, machine learning.

SUN Wei, born in 1995, M. S. candidate. His research interests include image processing, deep learning.
