
Green peach recognition based on improved discriminative regional feature integration algorithm in similar background

Huang Xiaoyu, Li Guanglin※, Ma Chi, Yang Shihang

(College of Engineering and Technology, Southwest University, Chongqing 400715, China)

To address the problems in machine vision recognition of immature green fruits under natural illumination, namely the similarity in color between fruit and background, uneven illumination, and occlusion by leaves, this paper proposes a method that combines color, texture and shape features within the framework of the discriminative regional feature integration (DRFI) algorithm to recognize immature green peaches. First, the graph-based image segmentation algorithm is applied with different parameter settings to segment the image into multiple levels; the saliency map of each level is then computed, and the level maps are fused with a linear combiner to obtain the DRFI saliency map. The DRFI saliency map is then segmented with an adaptively adjusted threshold derived from the OTSU algorithm, which reduces the mis-segmentation of fruit regions that receive a low probability in the saliency map. Fruits that remain adhered after segmentation are separated by a watershed segmentation algorithm combining marker control and the distance transform. The experimental results show that the correct recognition rate of the method is 91.7% on the training set and 88.3% on the validation set. Compared with the results reported in related literature and with the detection results of the original DRFI algorithm on the validation set, the correct recognition rate of the proposed method is 3.7 to 10.7 percentage points higher. The method effectively alleviates the problems of similar colors and leaf occlusion, and can serve as a reference for early yield estimation of fruit trees and for automated, intelligent picking of green fruits.

machine vision; image processing; algorithms; peach; salient object detection; feature extraction; watershed transform; recognition

0 Introduction

Applying machine vision to early yield estimation of fruit trees and to object recognition for agricultural robots has become a research hotspot in recent years [1]. Early yield estimation gives growers a clear picture of the distribution and quality of the expected yield, so that fertilization, spraying, harvesting and storage can be adjusted to make efficient use of resources [2-3]. Fruit- and vegetable-picking robots mechanize and automate harvesting, reduce labor and time costs, and greatly improve picking efficiency [4]. The objects of early yield estimation are immature green fruits, and the targets of picking robots also include green fruits such as fragrant pears and green apples; moreover, accurate recognition and localization of fruits is the key to robotic picking [5]. Research on accurate recognition methods for green fruits is therefore of great importance.

Fruit images collected in natural environments suffer from uneven illumination, shadows, leaf reflections, occlusion by branches and leaves, and mutual occlusion between fruits [6-7]. In addition, immature fruits are themselves green and close in color to leaves and weeds, so they cannot be distinguished by color features alone [8]. Various solutions have been reported in the literature. Bansal et al. [9] proposed a detection method based on fast Fourier transform (FFT) leakage to recognize green citrus images collected under natural illumination, with a recognition accuracy of 82.2%. Lu et al. [10-11] exploited the ring-shaped illumination profile on the fruit surface together with Hough-transform circle fitting; the algorithm reached a recall of 81.2% on 20 citrus orchard scene images, and after adding LBP (local binary pattern) texture features the accuracy reached 82.3% on 25 test images, but image acquisition required artificial lighting. Gan et al. [12] fused color and thermal images to detect immature citrus and achieved a recall of 90.4%, but citrus reaches its best thermal contrast only in the early morning, so the method depends on the acquisition time, and thermal imaging equipment is relatively expensive. Ma et al. [13] applied the dense and sparse reconstruction (DSR) [14] saliency detection method to recognize immature tomatoes, with a correct recognition rate of 77.6%; as an unsupervised saliency detection method its recognition rate is relatively low, and missed or false detections increase under strong light compared with weak light. References [15-20] used classifiers such as neural networks and random forests to detect immature tomatoes, apples, mangoes, grapes and other fruits with good results, but detection of immature green peaches has rarely been reported [21]. Different fruits differ greatly in background and fruit shape, so these methods do not perform well when applied directly to peach detection. As one of the characteristic fruits of southwest China, peach also urgently needs automated, intelligent production management.

To fill this gap in machine vision recognition of green peaches, this paper proposes a recognition method based on the framework of the discriminative regional feature integration (DRFI) algorithm [22-23] and the watershed segmentation algorithm. The DRFI saliency detection algorithm trains its model in a supervised manner and performs better than the DSR algorithm. Color, texture and shape features specific to green peaches replace some of the features used by the original DRFI algorithm, and the corresponding parameters are adjusted, so that the algorithm is better suited to computing saliency maps of immature green peaches. For fruits that remain adhered after OTSU segmentation of the saliency map and denoising by mathematical morphology, a watershed segmentation algorithm combining marker control and the distance transform is used for further separation. In this way, green peaches on the tree can be recognized effectively, providing a reference for early yield estimation of fruit trees and automatic picking of green fruits.

1 Materials and methods

1.1 Image acquisition

Images were collected on April 21, 2018 in the Hufeng Mountain orchard, Shapingba District, Chongqing. A Panasonic DMC_LX5GK digital camera was used under natural illumination to photograph the early-maturing peach cultivar Yanzhicui; 186 images were collected in total, covering front-lighting, backlighting and occlusion conditions. The camera lens was 30-80 cm from the fruit, the image resolution was 2560×1920 pixels, and images were stored in JPG format. To meet the requirement of real-time detection while avoiding image distortion, after repeated tests and with reference to related literature [10-11,13], bicubic interpolation was used to reduce the images to 1/8 of their original size, i.e. 320×240 pixels. Of the 186 collected images, 150 were randomly selected as the training set and the remaining 36 served as the validation set. All algorithms in this paper were run in MATLAB R2015b on a computer with an Intel(R) Xeon(R) CPU E5-2609 v2 @ 2.50 GHz and 16.0 GB of RAM.
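As a minimal illustration of this preprocessing step (the file name is assumed for the example), the bicubic downscaling can be done in MATLAB as follows:

```matlab
% Downscale a 2560x1920 orchard image to 1/8 of its size (320x240) with bicubic interpolation.
img      = imread('peach.jpg');                 % example file name (assumption)
imgSmall = imresize(img, 1/8, 'bicubic');       % equivalently: imresize(img, [240 320], 'bicubic')
```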

1.2 Sample annotation

Training the DRFI regression model requires a ground truth map of each original image as the label. Photoshop CS6 was used to annotate the ground truth maps of the 150 training images: target pixels were marked white (gray value 255) and background pixels black (gray value 0). In this way, every superpixel produced by graph-based segmentation of the original image corresponds to a region-level label. Fig.1 shows 6 of the 150 training images and their corresponding ground truth labels.
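The exact rule used to match superpixels to the ground truth is not spelled out here; a common choice, shown below as a hedged sketch, is to label a region as peach when the majority of its pixels are white in the ground truth map (labels is assumed to be the superpixel label map of the image):

```matlab
% Sketch: derive region-level labels from the pixel-level ground truth.
% The majority-vote rule is an assumption, not stated explicitly in the paper.
gt   = imread('peach_gt.png') > 128;     % binary ground truth: white = peach
nReg = max(labels(:));
regionLabel = -ones(nReg, 1);            % background regions are labelled -1
for r = 1:nReg
    if mean(gt(labels == r)) > 0.5       % most pixels of the region belong to a peach
        regionLabel(r) = 1;              % peach regions are labelled +1
    end
end
```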

Fig.1 Examples of peach images and their corresponding ground truth labels

2 Principle and improvement of the DRFI algorithm

2.1 Algorithm description

Fig.2 shows the main steps by which the DRFI algorithm in this paper computes the saliency map of a peach image. The algorithm is as follows (a MATLAB sketch of the complete pipeline is given after the note to Fig.2):

1) Multi-level segmentation. Given an image, the graph-based image segmentation algorithm [24] divides it into N superpixel regions, each superpixel being rendered in a different RGB color. A superpixel produced by a single segmentation may straddle the boundary of the salient object (a peach) and contain both object and background pixels, or it may be too small to carry enough features to judge whether it belongs to the salient object. Multi-level segmentation is therefore adopted: by varying the three parameters of the graph-based segmentation algorithm, sigma (the standard deviation of the Gaussian filter kernel), k (which controls the number of regions after merging) and min (regions with fewer than min pixels are merged with the most similar neighboring region), different segmentation results are generated. In total L groups of parameters are taken, dividing the image into L levels S_l (l = 1, 2, ···, L), S = {S_1, S_2, …, S_L}.

2) Saliency computation for each level. Each superpixel produced at each segmentation level is represented by a 26-dimensional feature vector consisting of 2 color features, 16 texture features and 8 shape features. The segmentation results of each level, S = {S_1, S_2, …, S_L}, are matched against the ground truth map to obtain the label of each superpixel, with peach regions labeled 1 and background regions labeled −1. The feature vectors and labels of the 150 training images are fed into a random forest to train a regression model y = f(x), which maps a feature vector x to a saliency value y assigned to the corresponding superpixel region, so that every pixel in the region receives that saliency probability. Mapping all N superpixels in this way yields the saliency map of each segmentation level.

3) Multi-level saliency fusion. The final saliency map is generated by fusing the L level saliency maps. Let A_m denote the saliency map of the m-th image in the training set, A_m = g(A_m^1, …, A_m^L), where A_m^l is the saliency map of the l-th segmentation level of the m-th image and g(·) is a linear combiner, as shown in Eq. (1):

A_m = Σ_{l=1}^{L} w_l A_m^l            (1)

where w_l is the weight of the l-th level. The weights are learned by least squares so that the total loss in Eq. (2) is minimized:

min_w Σ_{m=1}^{M} ‖ A_m* − Σ_{l=1}^{L} w_l A_m^l ‖²            (2)

where M is the number of training images, M = 150, and A_m* is the ground truth map of the m-th training image.

Note: In the "multi-level segmentation" part of Fig.2, the sigma, k and min values of the three images from top to bottom are [0.8, 100, 150], [0.9, 200, 200] and [0.8, 300, 150], respectively.
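The following MATLAB sketch outlines the three steps above for a single image at prediction time. It is a minimal sketch, not the authors' code: segmentGraphBased and extractRegionFeatures are hypothetical helper functions (the graph-based segmentation of Felzenszwalb and Huttenlocher [24] has no built-in MATLAB implementation), rfModel is assumed to be a trained random forest regressor, and fusionWeights the learned level weights.

```matlab
% Minimal sketch of the improved DRFI saliency computation for one image.
img    = imread('peach.jpg');
params = [0.8 100 150; 0.9 200 200; 0.8 300 150];   % [sigma k min] per level (values from the note to Fig.2)
L      = size(params, 1);
[h, w, ~] = size(img);
salMaps = zeros(h, w, L);

for l = 1:L
    labels = segmentGraphBased(img, params(l,1), params(l,2), params(l,3)); % hypothetical helper: superpixel label map
    feats  = extractRegionFeatures(img, labels);      % hypothetical helper: one 26-D row per superpixel
    scores = predict(rfModel, feats);                 % random forest regression -> saliency per superpixel
    salMap = zeros(h, w);
    for r = 1:max(labels(:))
        salMap(labels == r) = scores(r);              % broadcast the region score to its pixels
    end
    salMaps(:,:,l) = salMap;
end

% Step 3: linear fusion of the L per-level maps, Eq.(1)
finalSal = zeros(h, w);
for l = 1:L
    finalSal = finalSal + fusionWeights(l) * salMaps(:,:,l);
end
finalSal = mat2gray(finalSal);                        % normalise to [0,1] before thresholding
```

The fusion weights themselves can be fitted offline by solving the least-squares problem of Eq. (2) over the training images, for example with MATLAB's backslash operator on the stacked per-level saliency maps.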

2.2 Regional feature analysis and feature selection

The original DRFI algorithm proposes three kinds of regional features: contrast features, background description features and target description features. The contrast features are less important than the other two, and the background description features assume that pixels near the image border belong to the background. In fact, in fruit images collected under natural conditions the fruits are scattered randomly over the whole image without any regularity. Moreover, saliency algorithms are designed to detect the most prominent object in an image, whatever its type, rather than a specific object. When DRFI is used to detect the specific object of immature green peaches, the original regional features are therefore not entirely suitable.

Within the DRFI framework, this paper keeps only the 16 texture features from the target description features and adds features specific to green peaches: the mean of the R−B component in RGB color space, the mean of the Hue component in HSV color space, and 8 shape features of the region (area, perimeter, circularity, major axis length, minor axis length, aspect ratio, ratio of major axis length to perimeter, and eccentricity), giving 26 features in total for training the regression model.

2.2.1 Color features

When recognizing green targets whose color is close to the background, the R−B component of RGB color space and the Hue component of HSV color space are used as effective features, or, to reduce the influence of illumination changes, the image is first enhanced and these components are then extracted [4,25]. Fig.3 shows peach images taken under front lighting and backlighting together with their R−B and Hue components before and after image enhancement. Figs. 3c and 3h are the R−B difference maps extracted after contrast-limited adaptive histogram equalization of the image. Figs. 3e and 3j were obtained by histogram-equalizing the R, G and B channels separately, recombining them, converting the result to the HSV model and extracting the Hue component.

Comparing the front-lit and backlit images in Fig.3 shows that, with or without enhancement, the R−B and Hue components separate the target from the background well under front lighting, whereas under backlighting they cannot effectively distinguish peaches from the background even after enhancement. Color features alone are therefore unreliable, and other types of features must be added.
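A minimal sketch (not the authors' code) of how these two color features could be computed per superpixel, assuming a label map labels from the multi-level segmentation; the commented-out enhancement mirrors the operations described for Fig.3:

```matlab
% Mean R-B and mean Hue per superpixel region (sketch).
img = im2double(imread('peach.jpg'));
% Optional illumination compensation, cf. Fig.3: CLAHE before R-B, per-channel histeq before Hue.
% imgEq = cat(3, adapthisteq(img(:,:,1)), adapthisteq(img(:,:,2)), adapthisteq(img(:,:,3)));
RB  = img(:,:,1) - img(:,:,3);            % R-B chromatic difference
hsv = rgb2hsv(img);
H   = hsv(:,:,1);                         % Hue channel

nReg   = max(labels(:));
meanRB = zeros(nReg, 1);
meanH  = zeros(nReg, 1);
for r = 1:nReg
    mask      = (labels == r);
    meanRB(r) = mean(RB(mask));
    meanH(r)  = mean(H(mask));
end
```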

2.2.2 Texture features

In this paper the variance of the Leung-Malik (LM) filter bank responses and the variance of the uniform-pattern LBP feature [26] are used as regional texture features.

LBP, the local binary pattern, describes local image texture. Taking the gray value of the center pixel as a threshold, a neighboring pixel is marked 1 if its gray value is greater than or equal to that of the center pixel and 0 otherwise, and the binary number formed by the surrounding 0-1 sequence is taken as the LBP value of the center pixel. A circular neighborhood with P sampling points and radius R therefore produces 2^P patterns. To reduce the number of patterns, the uniform-pattern LBP is adopted and a variable U is introduced, as shown in Eq. (3):

U(LBP_{P,R}) = |s(g_{P−1} − g_c) − s(g_0 − g_c)| + Σ_{p=1}^{P−1} |s(g_p − g_c) − s(g_{p−1} − g_c)|            (3)

If the circular binary code corresponding to an LBP has at most 2 transitions from 0 to 1 or from 1 to 0, i.e. U ≤ 2, the pattern is called a uniform pattern; the remaining patterns form a single mixed class. The number of patterns is thus reduced to P(P−1)+2. The uniform-pattern LBP value is computed as in Eq. (4):

LBP_{P,R}^{riu2} = Σ_{p=0}^{P−1} s(g_p − g_c), if U(LBP_{P,R}) ≤ 2; otherwise P + 1            (4)

where s(·) is the sign function, g_p is the gray value of the p-th neighboring pixel, and g_c is the gray value of the center pixel. In this paper the number of neighborhood sampling points is P = 8, with gray values g_0, g_1, …, g_7, the radius is R = 2, and the superscript riu2 denotes the rotation-invariant uniform LBP with U of at most 2.
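As a minimal sketch of Eqs. (3) and (4), assuming nearest-neighbor sampling of the circular neighborhood (the paper does not state its interpolation choice), the riu2 LBP map of a gray image can be computed as follows; the regional texture feature is then the variance of this map within each superpixel, e.g. var(lbpMap(labels == r)):

```matlab
% Rotation-invariant uniform LBP (riu2) with P = 8, R = 2 (sketch, nearest-neighbour sampling).
function lbpMap = lbpRiu2(gray)
    gray = double(gray);
    P = 8; R = 2;
    [h, w] = size(gray);
    lbpMap = zeros(h, w);
    ang = 2*pi*(0:P-1)/P;
    dx  = round(R*cos(ang));                 % neighbour offsets, rounded to the nearest pixel
    dy  = round(-R*sin(ang));
    for y = 1+R : h-R
        for x = 1+R : w-R
            gc = gray(y, x);
            s  = zeros(1, P);
            for p = 1:P
                s(p) = gray(y+dy(p), x+dx(p)) >= gc;   % sign term s(g_p - g_c)
            end
            U = sum(abs(diff([s s(1)])));    % number of 0/1 transitions around the circle, Eq.(3)
            if U <= 2
                lbpMap(y, x) = sum(s);       % uniform pattern: number of 1s, Eq.(4)
            else
                lbpMap(y, x) = P + 1;        % all non-uniform patterns share one label
            end
        end
    end
end
```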

Fig.3 Peach images under front lighting and backlighting and their R−B and Hue components before and after image enhancement

2.2.3 Shape features

The contour of a peach is close to an ellipse and clearly different from the shapes of leaves, branches and other background objects. Therefore, in addition to the color and texture features, the shape features of each segmented region listed in Table 1 are extracted. These 8 shape features can be obtained directly or indirectly with the MATLAB function regionprops.

Table 1 Description of regional shape features
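A minimal sketch of how the 8 shape features of Table 1 can be derived from regionprops (labels again denotes a region label map); circularity, aspect ratio and the major-axis-to-perimeter ratio are computed from the returned properties:

```matlab
% Shape features per region via regionprops (sketch).
props     = regionprops(labels, 'Area', 'Perimeter', 'MajorAxisLength', ...
                        'MinorAxisLength', 'Eccentricity');
shapeFeat = zeros(numel(props), 8);
for r = 1:numel(props)
    A  = props(r).Area;
    Pm = props(r).Perimeter;
    a  = props(r).MajorAxisLength;
    b  = props(r).MinorAxisLength;
    circularity    = 4*pi*A / Pm^2;          % 1 for a perfect circle
    aspectRatio    = a / b;                  % length-width ratio
    axisPerimRatio = a / Pm;                 % major axis length to perimeter
    shapeFeat(r,:) = [A, Pm, circularity, a, b, aspectRatio, axisPerimRatio, props(r).Eccentricity];
end
```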

2.3 Algorithm parameter selection

Because the number of training images and the number of features differ from the original DRFI setting, some training parameters were adjusted. Following the original DRFI algorithm and experience from repeated tests, the training-set images are segmented into 25 levels; to balance accuracy and speed, the validation-set images are segmented into 15 levels. When training the random forest regression model, 50, 100 and 200 decision trees were tested, with the maximum number of features used by a single tree set to 5, 10, 15 and 26. The experiments showed that the best performance, i.e. saliency maps closest to the ground truth, was obtained with 100 decision trees and a maximum of 10 features.
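A minimal sketch of this training step with MATLAB's TreeBagger under the settings reported above; X is assumed to be the N×26 matrix of region feature vectors and y the corresponding region labels (+1 peach, −1 background):

```matlab
% Random forest regression for region saliency (sketch).
rfModel = TreeBagger(100, X, y, ...
                     'Method', 'regression', ...
                     'NVarToSample', 10);    % at most 10 features per split ('NumPredictorsToSample' in newer releases)
sal = predict(rfModel, Xnew);                % continuous saliency scores for new regions
```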

3 Saliency map processing

The saliency map is a gray-scale map in which each pixel value represents the probability that the pixel belongs to the salient object; further processing is needed to detect the fruits.

3.1 OTSU segmentation

As shown in Fig.4b, the algorithm assigns some fruit regions a low probability of being fruit, which appears as low gray values in the saliency map. The adaptive thresholds T computed with the OTSU algorithm [27] for the upper and lower images in Fig.4 are 0.4863 and 0.4941, respectively; segmenting the saliency map directly with T would assign the low-gray-value peaches to the background, as shown in Fig.4c. Repeated tests led to the following rule: binarize the saliency map with thresholds T and T−0.1, subtract the former result from the latter, and remove regions smaller than 500 pixels from the difference image. If any remaining region has a circularity smaller than 0.57 or an aspect ratio larger than 2.65, the threshold T−0.1 has introduced background and the segmentation should use T directly; otherwise T−0.1 is used as the threshold.
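A minimal sketch of this adaptive threshold adjustment, assuming the fused saliency map finalSal is normalized to [0,1]; the 500-pixel area limit and the 0.57/2.65 shape checks follow the values stated above:

```matlab
% Adaptive adjustment of the OTSU threshold for the saliency map (sketch).
T   = graythresh(finalSal);                  % OTSU threshold on the saliency map
bw1 = finalSal > T;
bw2 = finalSal > (T - 0.1);
diffRegions = bw2 & ~bw1;                    % extra regions admitted by the lower threshold
diffRegions = bwareaopen(diffRegions, 500);  % drop regions smaller than 500 pixels

props    = regionprops(diffRegions, 'Area', 'Perimeter', 'MajorAxisLength', 'MinorAxisLength');
useLower = true;
for r = 1:numel(props)
    circ  = 4*pi*props(r).Area / props(r).Perimeter^2;
    ratio = props(r).MajorAxisLength / props(r).MinorAxisLength;
    if circ < 0.57 || ratio > 2.65           % background-like region slipped in
        useLower = false;
        break;
    end
end
if useLower
    bw = bw2;                                % keep the lower threshold T - 0.1
else
    bw = bw1;                                % fall back to the plain OTSU threshold
end
```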

3.2 Mathematical morphology processing

3.2.1 Small region removal

Fig.5 shows an example of the whole saliency map processing chain. As shown in Fig.5b, the binarized image contains thin protrusions attached to the fruit and small residual regions; the segmentation threshold of Fig.5a is 0.4039. The thin spurs are removed with a morphological opening that breaks their narrow connections to the target and smooths the target contour, using a disk of radius 5 as the structuring element. For the small residual regions, the 10 smallest peaches in the whole data set were marked pixel by pixel in Photoshop, and their average size, counted in MATLAB, was 584 pixels; allowing for possible error, regions smaller than 500 pixels are removed. Fig.5c shows the result after thresholding, opening and small region removal.
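A minimal sketch of this cleanup on the binarized map bw from the previous step, using the disk radius and area threshold stated above:

```matlab
% Morphological cleanup of the binarised saliency map (sketch).
se = strel('disk', 5);            % disk structuring element, radius 5
bw = imopen(bw, se);              % break thin connections, smooth contours
bw = bwareaopen(bw, 500);         % remove residual regions smaller than 500 pixels
```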

Fig.4 Original image, saliency map, OTSU threshold segmentation and adjusted OTSU threshold segmentation

3.2.2 Marker-controlled watershed segmentation combined with distance transform

Fruits in natural conditions often grow in clusters. After OTSU segmentation and simple small region removal, the image is a binary image with white targets on a black background, and the white regions may still be adhered; they are separated with a watershed segmentation algorithm that combines marker control and the distance transform [28-30]. First the complement of the binary image is distance-transformed and the result is negated, as shown in Fig.5d. To overcome the over-segmentation caused by every local minimum becoming a catchment basin in the conventional watershed transform, the imextendedmin function is used to filter out local minima in Fig.5d whose depth does not exceed a threshold h, giving the set of internal markers shown in Fig.5e (only the small white regions in Fig.5e are the actual internal markers; they are overlaid on Fig.5c for easier viewing); the depth threshold h is set to 2 in this paper. The distance map is then modified by minima imposition and the watershed transform is applied, giving the segmentation result in Fig.5f. Finally, taking the center of each separated region as the circle center and the major axis length of the ellipse with the same second-order central moments as the region as the diameter [25], the detected peaches are drawn on the original image, as shown in Fig.5g.
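A minimal sketch of this step using standard Image Processing Toolbox functions on the cleaned binary map bw (the minima-suppression depth of 2 follows the text; origImg denotes the original color image):

```matlab
% Marker-controlled watershed combined with the distance transform (sketch).
D    = -bwdist(~bw);               % negated distance transform of the complement
mask = imextendedmin(D, 2);        % internal markers: suppress minima shallower than depth 2
D2   = imimposemin(D, mask);       % force regional minima only at the markers
Lw   = watershed(D2);
bw(Lw == 0) = 0;                   % watershed ridge lines split touching fruits

% Draw each detected peach: centre of the region, diameter = major axis of the equivalent ellipse.
stats = regionprops(bw, 'Centroid', 'MajorAxisLength');
imshow(origImg); hold on;
for r = 1:numel(stats)
    viscircles(stats(r).Centroid, stats(r).MajorAxisLength/2, 'Color', 'r');
end
```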

Fig.5 Saliency map processing: saliency map, OTSU segmentation, small area removal, distance transform, internal markers, watershed segmentation, and detection result

4 Experiments and result analysis

To verify the effectiveness of the proposed method, two groups of experiments were carried out. First, the 186 collected images were processed with the proposed method. Second, the detection results of the proposed method and of the original DRFI algorithm [22] on the 36 validation images (divided into front-lit and backlit scenes) were compared, and also compared with the results reported by Ma et al. [13] and Kurtulmus et al. [21].

4.1 Experimental results of the proposed algorithm

All images in the training and validation sets were processed, covering front lighting, backlighting, occlusion by branches and leaves, and overlapping fruits. After the saliency map was obtained with the improved DRFI algorithm proposed in this paper, it was processed as shown in Fig.5; some detection results are shown in Fig.6. The proposed method recognizes green peaches well under both front lighting and backlighting, and also handles occlusion and overlapping fruits fairly effectively.

Fig.6 Detection results in different scenes

Table 2 lists the detection results of the proposed algorithm on the 150 training images and 36 validation images; fruits occluded by no more than 1/3 were counted. The training images contain 315 fruits in total, of which 289 were detected, with 44 false detections and 26 missed detections, giving a correct recognition rate of 91.7%. The validation images contain 77 fruits, of which 68 were detected, with 12 false detections and 9 missed detections, giving a correct recognition rate of 88.3%.

Table 2 Detection results of the proposed algorithm

Note: The total numbers of fruits in the training and validation sets are 315 and 77, respectively.

4.2 Comparison of detection results of different methods

Fig.7 compares the saliency maps computed by the improved algorithm of this paper, the original DRFI algorithm [22] and the DSR algorithm [14]. As Fig.7b shows, the DRFI algorithm regards strongly illuminated areas as more salient, so saliency detection applied directly is easily affected by illumination changes. As Fig.7c shows, the DSR algorithm assumes that the target lies near the center of the image, so stems, leaves and other background near the image center are mistaken for targets while fruit regions nearer the border appear less salient. The method proposed in this paper uses only the framework of the DRFI algorithm and extracts color, texture and shape features specific to green peaches, so it overcomes the shortcomings of the DSR and DRFI saliency detection algorithms and separates target and background to a much larger extent.

Fig.7 Original image and the saliency maps of the original DRFI algorithm, the DSR algorithm, and the proposed method

The 36 validation images were divided into front-lit and backlit scenes, and the recognition results of the improved algorithm were compared with those of the original DRFI algorithm [22] and with the results reported by Ma et al. [13] and Kurtulmus et al. [21]. The detection results of the four methods are listed in Table 3.

The correct recognition rate of the proposed algorithm on the validation set is 88.3%, which is 7.8 percentage points higher than the 80.5% of the unmodified DRFI algorithm. Compared with the DSR-based saliency detection method of Ma et al., which also recognizes green fruits (immature tomatoes) and reached 77.6%, the proposed method is 10.7 percentage points higher. Compared with the algorithm of Kurtulmus et al., which also detects immature green peaches and achieved at best 84.6% on its validation set, the proposed method is 3.7 percentage points higher.

Table 3 also shows that the methods of references [13,21-22] all have lower correct recognition rates under front lighting than under backlighting, whereas the proposed method achieves 88.6% and 88.1% in front-lit and backlit scenes, respectively, which are essentially equivalent.

Table 3 Comparison of detection results of different methods on the validation set

In addition, the method of Kurtulmus et al. takes 72.8-112 s to process a single validation image, which cannot meet real-time requirements, whereas the trained model in this paper takes 3.16-4.58 s per validation image including the time to display the detection result, a reduction of at least 68.22 s in processing time.

5 Conclusions

To address the problems in recognizing immature green peaches, namely the similarity in color between fruit and background, occlusion by branches and leaves, overlapping fruits and sensitivity to illumination changes, this paper proposes a recognition method based on the framework of the DRFI (discriminative regional feature integration) saliency detection algorithm and a watershed segmentation algorithm combining marker control and the distance transform. The method first replaces part of the original DRFI features with color, texture and shape features specific to green peaches, adjusts the corresponding parameters and trains a regression model. The saliency map of a peach image is computed with this model, an initial segmentation is performed with the OTSU algorithm, noise is removed with mathematical morphology, and adhered fruit regions are separated with the marker-controlled, distance-transform-based watershed algorithm, thereby recognizing immature green peaches in natural environments. The experimental results show a correct recognition rate of 91.7% on the training set and 88.3% on the validation set; fruit regions are recognized fairly accurately under front lighting, backlighting, occlusion and overlap, and the method is little affected by illumination changes. The method offers a solution for early yield estimation and for picking robots harvesting green fruits, but the regional features it extracts are still hand-crafted. Future work will study the application of convolutional neural networks to the detection of immature green fruits.

[1] He Z L, Xiong J T, Lin R, et al. A method of green litchi recognition in natural environment based on improved LDA classifier[J]. Computers and Electronics in Agriculture, 2017, 140(8): 159-167.

[2] Linker R, Cohen O, Naor A. Determination of the number of green apples in RGB images recorded in orchards[J]. Computers and Electronics in Agriculture, 2012, 81(1): 45-57.

[3] Li H, Lee W S, Wang K. Identifying blueberry fruit of different growth stages using natural outdoor color images[J]. Computers and Electronics in Agriculture, 2014, 106(8): 91-101.

[4] Zhao Chuanyuan. Detection Methods of Fruit Maturity and Diseases Based on Image and Spectral Techniques[D]. Yangling: Northwest A&F University, 2017. (in Chinese with English abstract)

[5] Liu Jizhan. Research progress analysis of robotic harvesting technologies in greenhouse[J]. Transactions of the Chinese Society for Agricultural Machinery, 2017, 48(12): 1-18. (in Chinese with English abstract)

[6] Lu J, Sang N. Detecting citrus fruits and occlusion recovery under natural illumination conditions[J]. Computers and Electronics in Agriculture, 2015,110(1): 121-130.

[7] Wang Dandan, Xu Yue, Song Huaibo, et al. Fusion of K-means and Ncut algorithm to realize segmentation and reconstruction of two overlapped apples without blocking by branches and leaves[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2015, 31(10): 227-234. (in Chinese with English abstract)

[8] Li H, Lee W S, Wang K. Immature green citrus fruit detection and counting based on fast normalized cross correlation (FNCC) using natural outdoor colour images[J]. Precision Agriculture, 2016,17(6): 678-697.

[9] Bansal R, Lee W S, Satish S. Green citrus detection using fast Fourier transform (FFT) leakage[J]. Precision Agriculture, 2013, 14(1): 59-70.

[10] Lu Jun, Hu Xiuwen. Detecting green citrus fruit on trees in low light and complex background based on MSER and HCA[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2017, 33(19): 196-201. (in Chinese with English abstract)

[11] Lu J, Lee W S, Gan H, et al. Immature citrus fruit detection based on local binary pattern feature and hierarchical contour analysis[J]. Biosystems Engineering, 2018, 171(7): 78-90.

[12] Gan H, Lee W S, Alchanatis V, et al. Immature green citrus fruit detection using color and thermal images[J]. Computers and Electronics in Agriculture,2018,152(9): 117-125.

[13] Ma Cuihua, Zhang Xueping, Li Yutao, et al. Identification of immature tomatoes base on salient region detection and improved Hough transform method[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2016, 32(14): 219-226. (in Chinese with English abstract)

[14] Li X, Lu H, Zhang L, et al. Saliency detection via dense and sparse reconstruction[C]// IEEE International Conference on Computer Vision (ICCV), Sydney, 2013: 2976-2983.

[15] Yamamoto K, Guo W, Yoshioka Y, et al. On plant detection of intact tomato fruits using image analysis and machine learning methods[J]. Sensors, 2014, 14(7): 12191-12206.

[16] Bargoti S, Underwood J P. Image segmentation for fruit detection and yield estimation in apple orchards[J]. Journal of Field Robotics, 2017, 34(6): 1039-1060.

[17] Sengupta S, Lee W S. Identification and determination of the number of immature green citrus fruit in a canopy under different ambient light conditions[J]. Biosystems Engineering, 2014, 117(1): 51-61.

[18] Pothen Z S, Nuske S. Texture-based fruit detection via images using the smooth patterns on the fruit[C]// IEEE International Conference on Robotics and Automation, 2016: 5171-5176.

[19] Stein M, Bargoti S, Underwood J. Image based mango fruit detection, localisation and yield estimation using multiple view geometry[J]. Sensors, 2016, 16(11): 1915.

[20] Xue Yueju, Huang Ning, Tu Shuqin, et al. Immature mango detection based on improved YOLOv2[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2018, 34(7): 173-179. (in Chinese with English abstract)

[21] Kurtulmus F, Lee W S, Vardar A. Immature peach detection in colour images acquired in natural illumination conditions using statistical classifiers and neural network[J]. Precision Agriculture, 2014, 15(1): 57-79.

[22] Wang Jingdong, Jiang Huaizu, Yuan Zejian, et al. Salient object detection: A discriminative regional feature integration approach[J]. International Journal of Computer Vision, 2017, 123(2): 251-268.

[23] Jiang Huaizu, Wang Jingdong, Yuan Zejian, et al. Salient object detection: A discriminative regional feature integration approach[C]// IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway, 2013: 2083-2090.

[24] Felzenszwalb P F, Huttenlocher D P. Efficient graph-based image segmentation[J]. International Journal of Computer Vision, 2004, 59(2): 167-181.

[25] Zhao C, Lee W S, He D. Immature green citrus detection based on colour feature and sum of absolute transformed difference (SATD) using colour images in the citrus grove[J]. Computers and Electronics in Agriculture, 2016,124(6): 243-253.

[26] Ojala T, Pietikäinen M, Mäenpää T. Multiresolution gray-scale and rotation invariant texture classification with local binary patterns[J]. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2002, 24(7): 971-987.

[27] Gonzalez R C, Woods R E. Digital Image Processing Using MATLAB, Second Edition[M]. Beijing: Publishing House of Electronics Industry, 2013.

[28] Choi D, Lee W S, Ehsani R, et al. A machine vision system for quantification of citrus fruit dropped on the ground under the canopy[J]. Transactions of the ASABE, 2015, 58(4): 933-946.

[29] Zhang Chunlong, Zhang Ji, Zhang Junxiong, et al. Recognition of green apple in similar background[J]. Transactions of the Chinese Society for Agricultural Machinery, 2014, 45(10): 277-281. (in Chinese with English abstract)

[30] Shin J S, Lee W S, Ehsani R. Postharvest citrus mass and size estimation using a logistic classification model and a watershed algorithm[J]. Biosystems Engineering, 2012, 113(1): 42-53.

Green peach recognition based on improved discriminative regional feature integration algorithm in similar background

Huang Xiaoyu, Li Guanglin※, Ma Chi, Yang Shihang

(College of Engineering and Technology, Southwest University, Chongqing 400715, China)

In order to solve the problems in the recognition of immature green fruits under natural illumination in machine vision, such as the color similarity between fruits and background, uneven illumination and partial occlusion, this paper combined the color, texture and shape features of green peaches to identify immature green peaches based on the DRFI (discriminative regional feature integration) algorithm. The color features included the mean of the R component minus the B component (R−B) and the mean of the Hue component. The texture features were the variances of the LM (Leung-Malik) filter bank responses and of the LBP (local binary pattern) feature, and the shape features included area, perimeter, circularity, major axis length, minor axis length, length-width ratio, ratio of major axis length to perimeter, and eccentricity. The DRFI algorithm mainly had 3 steps, that is, multi-level segmentation, saliency computation in each level, and multi-level saliency fusion. Firstly, the input image was preprocessed by multi-level segmentation, generated with the graph-based image segmentation algorithm under different control parameters: the standard deviation of the Gaussian filter kernel (sigma), the parameter controlling the number of merged regions (k), and the minimal number of pixels of a segmented region (min). As the values of the control parameters changed, different image segmentation results were obtained. In this paper, each input image in the training set was divided into 25 levels, and each level was further divided into several superpixels. Secondly, each superpixel in each level was described by 26 feature variables, which included 2 color features, 16 texture features and 8 shape features. The segmentation result of each level of the input image was matched with the ground truth map, and the tag of each superpixel was produced, either positive (peach) or negative (background). The 26-dimensional feature vectors and tags of the superpixels were input into the random forest, a regression model was trained, and the saliency map of each segmentation level was then calculated by the model. Thirdly, the DRFI saliency map was obtained by a linear combiner fusing the multi-level saliency maps, whose weights were given by a least squares estimator. To effectively detect immature green peaches in the natural environment, the DRFI saliency map needed further processing. The adaptive segmentation threshold obtained from the OTSU algorithm for the DRFI saliency map was therefore adjusted, to reduce the wrong segmentation of fruits assigned a low probability in the saliency map. Mathematical morphology was then used, for example to remove noise from the binary map. A watershed segmentation algorithm combining marker control and the distance transform was used to separate fruits that were still adhered after segmentation. A total of 186 images were collected as experimental samples; 150 images were randomly selected as the training set, and the remaining 36 images served as the validation set. The experimental results of peach image recognition showed that the recognition accuracy of the proposed method was 91.7% on the training set and reached 88.3% on the validation set. At the same time, the recognition results of the proposed method outperformed those of other methods, including Kurtulmus et al. (2014), Ma et al. (2016), and the original DRFI algorithm (2017). Furthermore, the proposed algorithm showed good performance in complex scenes such as sunny side, shadow side, occlusion and overlap. The recognition results revealed that the proposed method can provide a reference for the early estimation of fruit yield and the automatic, intelligent picking of green fruits.

machine vision; image processing; algorithms; peach; salient object detection; feature extraction; watershed transform; recognition


Huang Xiaoyu, Li Guanglin, Ma Chi, Yang Shihang. Green peach recognition based on improved discriminative regional feature integration algorithm in similar background[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2018, 34(23): 142-148. (in Chinese with English abstract) doi:10.11975/j.issn.1002-6819.2018.23.017 http://www.tcsae.org

Received: 2018-07-31

Revised: 2018-10-26

Supported by the Key Project of the Chongqing Science and Technology Commission (csk2016shmszx80018) and the Chongqing Postgraduate Research and Innovation Project (CYS18109)

Huang Xiaoyu, research interest: image processing. Email: 1653370505@qq.com

※Corresponding author: Li Guanglin, Professor, PhD, doctoral supervisor; research interests: sensors and intelligent detection. Email: liguanglin@swu.edu.cn

doi: 10.11975/j.issn.1002-6819.2018.23.017

CLC number: S126    Document code: A    Article ID: 1002-6819(2018)-23-0142-07
