· Agricultural Information and Electrical Technologies ·
Path detection of visual navigation for jujube harvesters
Zhang Xiongchu1, Chen Bingqi1※, Li Jingbin2, Liang Xihuizi1,2, Yao Qingwang2, Mu Shuhao1, Yao Wenguang1
(1. College of Engineering, China Agricultural University, Beijing 100083, China; 2. College of Mechanical and Electrical Engineering, Shihezi University, Shihezi 832003, China)
Aiming at harvesting operations in Jun-jujube and Hui-jujube orchards in Xinjiang, this study proposes a visual navigation path detection algorithm along jujube tree rows for jujube harvesters. The orchard type is judged automatically from the relationship between the standard deviation and the minimum value of the vertical cumulative histogram of the B component in a fixed region of the orchard image. For Hui-jujube orchards, the image is first grayed with a color-difference method and binarized with the OTSU method, and area-based denoising and hole filling are then applied; within the processing region, the pixel rows are scanned from top to bottom, the mean coordinate of the pixels with value 0 on each row is taken as that row's candidate point, the mean of all candidate point coordinates is taken as the known point of the Hough transform, and the navigation path is finally fitted with a Hough transform constrained to pass through the known point. For Jun-jujube orchards, the scan interval is determined by vertically accumulating the R component within the processing region; the rows are then scanned from top to bottom within the scan interval, the pixel with the smallest R value on each row is taken as that row's candidate point, the mean of all candidate point coordinates is taken as the known point of the Hough transform, and the navigation path is fitted with a Hough transform through the known point. Test results show that for Hui-jujube and Jun-jujube orchards the average path detection accuracy of the algorithm is 93% and 92%, and the average processing time per frame is 0.042 and 0.046 s, respectively. The detection accuracy and real-time performance meet the operating requirements of jujube harvesters, the orchard type is identified automatically, and the method can provide a theoretical basis for automatic driving of jujube harvesters.
agricultural machinery; image processing; visual navigation; jujube orchard; Hough transform
The application of automatic driving technology to agriculture is a current research focus, and visual-navigation-based automatic driving has become an important part of this technology owing to the rich information it collects, its flexibility, strong anti-interference capability and low cost [1-2].
Navigation path detection is a key technology for the autonomous navigation of agricultural machinery [3-6]. For visual navigation path detection in agriculture, Wang et al. [7-15] studied navigation path detection for greenhouses, orchards, field weeding, wheat harvesting, rice harvesting and cotton sowing. Xie et al. [16-19] focused on embedded visual navigation, laser navigation, and the application of scan filtering and color-space conversion to visual navigation. Guy et al. [20-22] proposed navigation path detection algorithms adapted to vineyards, field obstacles and tractors working in the field. Liu et al. [23] proposed a weak navigation line detection algorithm for wheat planters using wavelet transform, linear analysis and the correlation between successive frames. Liang et al. [24] determined candidate points by emphasizing the G component and accumulating it over skipped rows. Li et al. [25] processed cotton field images at harvest time with a color-component-difference method and moving smoothing, and determined candidate points by searching from the lowest valley toward the unharvested side for the critical rising point of the curve. Zhang et al. [26] designed an algorithm with strong anti-interference capability and wide adaptability for visual navigation path detection in cotton film-mulching and sowing operations, which is easily affected by illumination intensity, noise and furrow-line depth. Li et al. [27] improved the traditional pure pursuit algorithm to make it suitable for curved path tracking. He et al. [28] proposed a baseline extraction method combining edge detection and scan filtering. Meng et al. [29] studied machine-vision-based navigation path recognition for agricultural machinery under natural illumination to reduce the influence of natural light.
The above studies all concern inter-row navigation path detection for low-growing crops, and reports on navigation path detection for harvesting operations in the jujube orchards of Xinjiang are scarce. The jujube industry occupies an important position in the social economy of Xinjiang, and the main varieties are Jun-jujube and Hui-jujube. Peng et al. [30] proposed a navigation baseline detection algorithm for Jun-jujube orchards based on "row threshold segmentation" and "inter-row region" methods, but that method applies only to travel between rows and does not apply to Hui-jujube orchards. Shihezi University has developed a jujube harvester that works astride the jujube tree rows and is expected to be one of the main machines for jujube harvesting in Xinjiang. Based on this harvester, this study proposes a visual navigation path detection algorithm for on-row operation in Jun-jujube and Hui-jujube orchards that takes the canopy center line as the navigation path.
Harvesting videos of Jun-jujube and Hui-jujube orchards were collected at the 14th Regiment, Alar, Xinjiang; the Jun-jujube images were collected from 3:00 to 5:00 p.m. on October 15, 2018, and the Hui-jujube images from 3:00 to 5:00 p.m. on October 20, 2019. As shown in Fig.1a, the camera was mounted directly in front of the cab of the jujube harvester, 2.5 m above the ground and at a horizontal angle of 30° to the ground (ensuring that the full width of the jujube tree row is visible). As shown in Fig.1b, the harvester was driven in working mode at about 2 km/h while the operation video was recorded at 640×480 pixels and 30 frames/s and stored in AVI format. The algorithm was developed with Microsoft Visual Studio 2010 on ImageSys, a general-purpose image processing platform of Beijing Modern Fubo Technology Co., Ltd. The computer used an Intel(R) Core(TM) i5-4590 CPU at 3.3 GHz with 8 GB of memory.
1.Jujube harvester 2.Camera 3.Ground 4.Harvested jujube tree row 5.Jujube tree row being harvested 6.Jujube tree row to be harvested
Note: the vertical distance between the camera's optical axis and the ground is given in m, and the horizontal angle between the camera's optical axis and the ground in (°); the arrow indicates the working direction.
Fig.1 Schematic diagram of camera installation and jujube harvester operation
As shown in Fig.2, the top-left corner of the image is the coordinate origin, with the x-axis positive to the right and the y-axis positive downward. The color components of the image are denoted R (red), G (green) and B (blue). The x and y coordinates of a pixel correspond to its column and row, respectively.
Fig.2 Schematic diagram of the processing window and fitted navigation path in the Hui-jujube orchard
Hui-jujube trees have tall trunks, many branches and considerable residual foliage at harvest time, so the images contain much noise and the pixel distribution shows no obvious regularity; the collected images must therefore first undergo area-based denoising and hole filling before candidate points on the canopy are extracted to fit the navigation path. Jun-jujube trees have short trunks and few branches, their leaves have mostly fallen naturally by harvest time, and the pixel distribution is regular, so the navigation path can be extracted directly by fitting candidate points on the canopy.
1.2.1 Automatic discrimination of the orchard operation mode
To avoid interference from the area beyond the field end at the top of the image and from near-field noise at the bottom, the middle 1/3 of the image in the y-axis direction is set as the processing region. Within this region, the vertical cumulative values of the B component are stored in an array, and the minimum value m and standard deviation σ of the array are computed. If m/σ < 5, the Jun-jujube orchard operation mode is selected; otherwise the Hui-jujube orchard operation mode is selected. The operation mode is determined from the first frame at the start of operation, and all subsequent navigation line detection uses that mode.
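The discrimination rule above can be sketched in Python (an illustrative sketch, not the authors' implementation; the BGR channel order, the array layout and all function names are assumptions):

```python
import numpy as np

def classify_orchard(image_bgr, ratio_threshold=5.0):
    """Classify the orchard type from the vertical cumulative values of
    the B component over the middle 1/3 of the image rows.

    Returns "jun" when min/std < ratio_threshold, else "hui".
    """
    h = image_bgr.shape[0]
    region = image_bgr[h // 3: 2 * h // 3]                   # middle 1/3 along y
    col_sums = region[:, :, 0].astype(np.int64).sum(axis=0)  # vertical B accumulation
    m, sigma = col_sums.min(), col_sums.std()
    return "jun" if m / sigma < ratio_threshold else "hui"
```

A pronounced valley in the accumulated B histogram (as in the Jun-jujube images) drives the minimum far below the mean, so the min/std ratio falls under the threshold.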
1.2.2 Navigation path detection in the Hui-jujube orchard
1) Graying and binarization
Analysis of Hui-jujube orchard images shows that the pixels of the left and right inter-row regions differ markedly from the canopy pixels, with a consistent ordering among their RGB values, so a color-difference method is used to gray the image and the OTSU method is then used to binarize the grayscale image.
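A minimal sketch of the graying and binarization step follows; the color-difference form |2B − G − R| is an assumption chosen for illustration (the exact components are not legible in the source), and OTSU thresholding is implemented directly so the sketch is self-contained:

```python
import numpy as np

def gray_color_difference(image_bgr):
    """Assumed color-difference grayscale transform |2B - G - R|."""
    b = image_bgr[:, :, 0].astype(np.int32)
    g = image_bgr[:, :, 1].astype(np.int32)
    r = image_bgr[:, :, 2].astype(np.int32)
    return np.clip(np.abs(2 * b - g - r), 0, 255).astype(np.uint8)

def otsu_threshold(gray):
    """Return the OTSU threshold maximizing between-class variance."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * np.arange(256))      # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b2[~np.isfinite(sigma_b2)] = 0
    return int(np.argmax(sigma_b2))

def binarize(gray):
    t = otsu_threshold(gray)
    return (gray > t).astype(np.uint8) * 255
```

OTSU picks the gray level that best separates the two pixel populations (inter-row ground vs. canopy), which is why no manual threshold is needed per frame.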
2) Area-based denoising
The Noise_remover function of the ImageSys platform is used, taking black pixels as the object, to remove connected regions of black pixels containing fewer than 50 pixels.
3) Hole filling
The Holl_filling function of the ImageSys platform is used to perform 8-neighborhood hole filling twice, once taking black pixels as the object and once taking white pixels as the object.
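The two clean-up steps can be reproduced with a plain connected-component sweep (an illustrative stand-in for ImageSys's Noise_remover and Holl_filling; the 8-connectivity of the sweep and the treat-border-components-as-background rule are assumptions):

```python
from collections import deque
import numpy as np

def _components(mask):
    """Yield lists of (y, x) for each 8-connected True component."""
    h, w = mask.shape
    seen = np.zeros_like(mask, dtype=bool)
    for y0 in range(h):
        for x0 in range(w):
            if mask[y0, x0] and not seen[y0, x0]:
                comp, q = [], deque([(y0, x0)])
                seen[y0, x0] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = y + dy, x + dx
                            if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                                seen[ny, nx] = True
                                q.append((ny, nx))
                yield comp

def remove_small_black(binary, min_area=50):
    """Flip 8-connected black (0) regions smaller than min_area to white."""
    out = binary.copy()
    for comp in _components(binary == 0):
        if len(comp) < min_area:
            for y, x in comp:
                out[y, x] = 255
    return out

def fill_holes(binary, target=0):
    """Fill holes inside regions of `target` color: any opposite-color
    component that does not touch the image border is overwritten."""
    out = binary.copy()
    other = 255 - target
    h, w = binary.shape
    for comp in _components(binary == other):
        if not any(y in (0, h - 1) or x in (0, w - 1) for y, x in comp):
            for y, x in comp:
                out[y, x] = target
    return out
```

Running `fill_holes` once with `target=0` and once with `target=255` mirrors the two passes described above.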
4) Extraction of the candidate point group
As shown by the middle dashed box in Fig.2, the middle 1/3 of the image in the x-axis direction is taken as the processing window. Within the window, the pixel rows are scanned from top to bottom, and the mean coordinate of the pixels with value 0 on each row is taken as that row's candidate point, giving the candidate point group in Fig.2.
5) Determination of the known point
The mean of the coordinates of all candidate points is taken as the coordinates of the known point of the Hough transform.
6) Fitting the navigation path
Based on the candidate point group of step 4) and the known point of step 5), the navigation path is fitted with a Hough transform constrained to pass through the known point [24], shown as the solid line in Fig.2.
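Steps 4)-6) can be sketched as follows. The point-constrained Hough transform is approximated here by voting over the line angle only, since every admissible line passes through the known point; this one-parameter vote is an assumption about how the transform of [24] operates, not a transcription of it:

```python
import numpy as np

def candidate_points(binary):
    """Per-row candidates: mean x of black (0) pixels on each row."""
    pts = []
    for y in range(binary.shape[0]):
        xs = np.flatnonzero(binary[y] == 0)
        if xs.size:
            pts.append((xs.mean(), float(y)))
    return pts

def hough_through_point(points, known, n_angles=180, tol=2.0):
    """Constrained Hough: all lines pass through `known`; vote over the
    angle only and return the angle (rad) of the line normal with the
    most inliers within distance `tol`."""
    x0, y0 = known
    best_angle, best_votes = 0.0, -1
    for theta in np.linspace(0, np.pi, n_angles, endpoint=False):
        cx, cy = np.cos(theta), np.sin(theta)          # line normal
        votes = sum(abs((x - x0) * cx + (y - y0) * cy) <= tol
                    for x, y in points)
        if votes > best_votes:
            best_angle, best_votes = theta, votes
    return best_angle

def fit_navigation_line(binary):
    """Candidate extraction followed by the constrained fit."""
    pts = candidate_points(binary)
    known = tuple(np.mean(pts, axis=0))
    return known, hough_through_point(pts, known)
```

Fixing the line to the mean candidate point collapses the usual two-parameter (ρ, θ) accumulator to a single angle sweep, which keeps the per-frame cost low.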
1.2.3 Navigation path detection in the Jun-jujube orchard
1) Determination of the processing window
Considering the camera's mounting position, height above the ground and horizontal angle to the ground, the middle 1/3 of the image in the x-axis direction is taken as the processing window.
2) Determination of the orchard field end
Let the field-end position be row E. Within the processing window, the pixel rows are scanned from top to bottom and the cumulative R value of each row is stored in an array. After the scan, the mean μ and standard deviation σ1 of the array are computed. Finally, scanning from bottom to top, each array value is tested against (μ − 3σ1); if 10 consecutive values fall below it, the scan row corresponding to the first value that satisfied the condition is set as the field end; otherwise E = 0, i.e. no field end exists.
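A sketch of the field-end test under the μ − 3σ rule stated above (the window placement, BGR channel order and the convention for the returned row are assumptions):

```python
import numpy as np

def detect_field_end(image_bgr, run_length=10):
    """Scan the processing window bottom-up and return the first row of a
    run of `run_length` consecutive rows whose accumulated R value falls
    below (mean - 3*std); return 0 when no field end is present."""
    h, w = image_bgr.shape[:2]
    window = image_bgr[:, w // 3: 2 * w // 3]                # middle 1/3 along x
    row_sums = window[:, :, 2].astype(np.int64).sum(axis=1)  # R accumulation per row
    mu, sigma = row_sums.mean(), row_sums.std()
    run = 0
    for y in range(h - 1, -1, -1):                           # bottom-up scan
        run = run + 1 if row_sums[y] < mu - 3 * sigma else 0
        if run == run_length:
            return y + run_length - 1   # first (bottom-most) row of the run
    return 0
```

The 3σ margin only fires when a small fraction of rows is dramatically darker in R than the rest, which matches the sharp drop beyond the field end.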
3) Determination of the scan interval
Let the scan reference line be pixel column c; the scan interval extends w pixels (w = 30 in this study) on each side of the reference line. The value of c is determined as follows: within the processing window, the pixel columns are scanned from left to right and the cumulative R value of each column is stored in an array; c is the column corresponding to the minimum value of the array.
4) Extraction of the candidate point group
One candidate point is extracted per pixel row, as follows: within the scan interval, the rows are scanned from bottom to top up to the field end, and the pixel with the minimum R value on each row is taken as that row's candidate point.
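Steps 3) and 4) can be sketched as follows (illustrative only; the function names, the half-open interval convention and the bottom-up ordering of the returned points are assumptions):

```python
import numpy as np

def scan_interval(image_bgr, half_width=30):
    """The column with the smallest vertically accumulated R value inside
    the processing window defines the scan reference line; the interval
    extends half_width pixels on each side of it."""
    h, w = image_bgr.shape[:2]
    window_left = w // 3
    col_sums = image_bgr[:, window_left: 2 * w // 3, 2].astype(np.int64).sum(axis=0)
    c = window_left + int(np.argmin(col_sums))       # scan reference column
    return max(0, c - half_width), min(w, c + half_width + 1)

def min_r_candidates(image_bgr, interval, field_end_row=0):
    """One candidate per row: the pixel with the minimum R value inside
    the scan interval, collected bottom-up until the field-end row."""
    left, right = interval
    pts = []
    for y in range(image_bgr.shape[0] - 1, field_end_row - 1, -1):
        row = image_bgr[y, left:right, 2]
        pts.append((left + int(np.argmin(row)), y))
    return pts
```

The candidate points produced here feed the same mean-point and constrained Hough fit used for the Hui-jujube orchard.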
5) Determination of the known point
The mean of the coordinates of all candidate points is taken as the coordinates of the known point of the Hough transform.
6) Fitting the navigation path
Based on the candidate point group of step 4) and the known point of step 5), the navigation path is fitted with a Hough transform constrained to pass through the known point [24].
The detection procedure is shown in Fig.3.
Fig.3 Flow chart of the detection method
Five navigation videos each of the Hui-jujube and Jun-jujube orchards were recorded for algorithm development and testing; after development, actual navigation tests were carried out in both types of orchard. Three videos of each orchard type were selected for testing, one under front-lighting and two under back-lighting conditions; the number of frames in each video and the results of the orchard classification test are given in Table 1.
Table 1 Results of the jujube orchard classification test
Fig.4 shows images of the Hui-jujube and Jun-jujube orchards together with the cumulative B-component histograms of the mode-discrimination processing region. The histogram of the Jun-jujube orchard has an obvious valley and fluctuates strongly, whereas the valley is not obvious in the Hui-jujube image, so the minimum value and standard deviation of the B component were adopted as the discrimination criteria. For the Hui-jujube image, the minimum is 18 427, the standard deviation 1 890.777, and the ratio of minimum to standard deviation 9.75; for the Jun-jujube image, the minimum is 7 338, the standard deviation 1 979.634, and the ratio 3.71. Based on the statistics of the video images, a ratio threshold of 5 was initially set for the classification test. Table 1 shows that the classification accuracy is 100% for both orchard types, so a ratio below 5.0 indicates a Jun-jujube orchard and otherwise a Hui-jujube orchard.
Fig.4 Discrimination regions of the orchard types and their vertical cumulative B-component histograms
Fig.5 shows the navigation path detection process for the Hui-jujube orchard under different working conditions.
In Fig.5a and Fig.5b, comparing the binary image with the denoised image shows that the black noise pixels scattered among the white pixels of the inter-row regions after binarization are effectively removed by denoising. Comparing the denoised image with the black-hole-filled image shows that after hole filling on the black pixels, the "holes" (white pixels) within the black canopy region are largely filled, and the boundary between the inter-row regions and the canopy becomes clearer. Comparing the black-hole-filled image with the white-hole-filled image shows that after hole filling on the white pixels, the "holes" formed by black pixels within the white inter-row regions are completely filled, making the partitioning of the image still clearer and easing candidate point extraction (the candidate point group lies among the white pixel points of the canopy region), so the fitted navigation path is accurate.
In Fig.5c, the jujubes in the boxed region of the original image have almost all dropped and the branches are sparse; after binarization, denoising and black- and white-hole filling, the inter-row region and the canopy region are partly stuck together, which lowers the accuracy of the candidate points there and causes the detected navigation path to deviate somewhat from visual judgment.
In Fig.5d, the missing-tree condition makes the adhesion between the inter-row and canopy regions severe and the image is partitioned incorrectly, so the detected navigation path has a large error.
Fig.5 Navigation path detection results in the Hui-jujube orchard under different working conditions
The algorithm was verified with the collected operation videos; a detection was judged wrong when the angle between the detected path and the manually observed path exceeded about 5°. The results are shown in Table 2.
Table 2 Results of the navigation path detection test
Table 2 shows that the accuracies of the three Hui-jujube orchard videos are 92%, 90% and 98%, with an average accuracy of 93% and an average processing speed of 0.042 s/frame. The algorithm therefore meets the needs of actual Hui-jujube harvesting, and the detected path can serve as the navigation path for vision-guided automatic driving during Hui-jujube harvesting. The main causes of false detection are sparse branches and missing trees, which make the inter-row and canopy regions stick together and lower the accuracy of the extracted candidate points.
Fig.6 shows the navigation path detection results in the Jun-jujube orchard under different working conditions. Fig.6a shows that under front lighting with severe vehicle vibration, the target jujube row falls outside the processing region and path detection fails. Comparison of Fig.6c and 6d shows that interference from a human figure causes false path detection. Fig.6b shows that under back lighting with missing trees and plastic-film interference, the back-lit film causes false detection of the field end while the navigation path is still detected accurately, indicating that the minimum R value extracts candidate points accurately and stably.
Fig.7a is the R-component curve, within the scan interval, of the pixel row containing the human figure under the front-lighting, human-interference condition. Two valleys exist before column 240, at columns 197 and 223; comparison shows that the former should be the detected candidate point position with good accuracy, but the interference of the human figure at column 240 makes column 240 the candidate point position, so the candidate point of that row is misdetected and path detection ultimately fails.
Fig.6 Navigation path detection results in the Jun-jujube orchard under different working conditions
Fig.7b is the horizontal cumulative R-component curve of the processing region under the back-lighting, film-interference condition. Row 220 is where the plastic film is concentrated, producing a sharply falling valley there, so the field-end discrimination method misdetects this position as the field end; that is, film interference causes the field-end error. Fig.7c is the R-component curve of a random pixel row of the processing region under the front-lighting, shadow condition; column 240 is the canopy center and coincides with the valley there, i.e. the candidate point extracted with the minimum R value is accurate. To verify the stability of the minimum R value as the candidate point feature, 100 consecutive frames were randomly taken from each of the three Jun-jujube videos, and the correctness of the candidate points extracted on rows 40, 130 and 230 of each frame (corresponding to the far view, the middle and the near view of the image) was checked manually. The results are shown in Table 3; the near-view accuracies of the three videos are 93%, 94% and 92%, slightly lower than those of the far view and the middle, mainly because the pixel density of the canopy is lower in the near view, with more noise and severe interference. The main causes of misdetection are human-figure interference and vehicle vibration.
Table 3 Results of the candidate point extraction test
In summary, the candidate points extracted with the minimum R value are stable, and the feature is suitable for candidate point extraction.
Fig.7 R-component curves for human-figure interference, field-end misdetection and candidate point feature analysis
The algorithm was verified with the collected operation videos; a detection was judged wrong when the angle between the detected path and the manually observed path exceeded about 5°. The results are shown in Table 4.
Table 4 shows that the accuracies of the three Jun-jujube orchard videos are 93%, 95% and 90%, with an average accuracy of 92%, an average processing speed of 0.046 s/frame and accurate field-end detection. The algorithm therefore meets the needs of actual Jun-jujube harvesting, and the detected path can serve as the navigation path for vision-guided automatic driving during harvesting. The main causes of false path detection are vehicle vibration and human-figure interference, which shift the scan interval abruptly and lower the candidate point accuracy on the rows containing the figure, so the fitted path does not meet the accuracy requirement; film interference is the main cause of field-end misdetection.
Table 4 Results of the navigation path detection test
This study developed a visual navigation path detection algorithm for on-row harvesting images of Hui-jujube and Jun-jujube orchards. The algorithm automatically identifies the orchard type to determine the operation mode and also detects the field end of Jun-jujube orchards:
1) The B component is accumulated vertically over the middle 1/3, in the y-axis direction, of the first frame at the start of operation, and the orchard type is determined from the relationship between the minimum value and the standard deviation, so that the system selects the operation mode automatically and meets practical operating requirements.
2) For Hui-jujube orchard images, graying and binarization are performed first, after which the image partitions are still indistinct; area-based denoising and hole filling then make the boundary between the canopy and inter-row regions clear. Within the processing region, the rows are scanned from top to bottom, the mean coordinate of the black pixels on each row is taken as that row's candidate point, the mean of all candidate point coordinates is taken as the known point of the Hough transform, and the navigation path is fitted with a Hough transform through the known point.
3) For Jun-jujube orchards, the R value is accumulated row by row within the processing region, and the field-end position is determined from the mean and standard deviation of the data. From the bottom up to the field end, a candidate point is extracted on each row at the minimum R value, the mean of all candidate point coordinates is taken as the known point of the Hough transform, and the navigation path is fitted with a Hough transform through the known point.
4) Tests with the collected multi-condition images of both orchards show an average detection accuracy of 93% at 0.042 s/frame for the Hui-jujube orchard and 92% at 0.046 s/frame for the Jun-jujube orchard. The algorithm suits harvesting in both orchard types; the accuracy of the extracted path and the real-time performance satisfy practical requirements, the jujube variety and the Jun-jujube field end are identified accurately, and the method provides a theoretical basis for vision-guided automatic driving in jujube harvesting.
[1] Han Shufeng, He Yong, Fang Hui. Recent development in automatic guidance and autonomous vehicle for agriculture: A review[J]. Journal of Zhejiang University: Agric. & Life Sci., 2018, 44(4): 381-391, 515. (in English with Chinese abstract)
[2] Ji Changying, Zhou Jun. Current situation of navigation technologies for agricultural machinery[J]. Transactions of the Chinese Society for Agricultural Machinery, 2014, 45(9): 44-54. (in Chinese with English abstract)
[3] Liu Yang, Gao Guoqin. Research development of vision-based guidance directrix recognition for agriculture vehicles[J]. Journal of Agricultural Mechanization Research, 2015, 37(5): 7-13. (in Chinese with English abstract)
[4] Chen Bingqi. Study on vision navigation for field work[J]. Science & Technology Review, 2018, 36(11): 66-81. (in Chinese with English abstract)
[5] Liu Yang. Research on Objects Image High-speed Detection Algorithm under Natural Environments[D]. Beijing: China Agricultural University, 2014. (in Chinese with English abstract)
[6] Zhai Zhiqiang, Zhu Zhongxiang, Du Yuefeng, et al. Test of binocular vision-based guidance for tractor based on virtual reality[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2017, 33(23): 56-65. (in Chinese with English abstract)
[7] Wang Xinzhong, Han Xu, Mao Hanping, et al. Navigation line detection of tomato ridges in greenhouse based on least square method[J]. Transactions of the Chinese Society for Agricultural Machinery, 2012, 43(6): 161-166. (in Chinese with English abstract)
[8] Gao Guoqin, Li Ming. Navigating path recognition for greenhouse mobile robot based on K-means algorithm[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2014, 30(7): 25-33. (in Chinese with English abstract)
[9] Feng Juan, Liu Gang, Si Yongsheng, et al. Algorithm based on image processing technology to generate navigation directrix in orchard[J]. Transactions of the Chinese Society for Agricultural Machinery, 2012, 43(7): 185-189, 184. (in Chinese with English abstract)
[10] Meng Qingkuan, Qiu Ruicheng, He Jie, et al. Development of agricultural implement system based on machine vision and fuzzy control[J]. Computers and Electronics in Agriculture, 2015, 112: 128-138.
[11] Cui Wei, Ding Ling. Research on path planning for mobile picking robot based on visual navigation and RBF[J]. Journal of Agricultural Mechanization Research, 2016, 38(11): 234-238. (in Chinese with English abstract)
[12] Li Jingbin, Chen Bingqi, Liu Yang. Image detection method of navigation route of cotton plastic film mulch planter[J]. Transactions of the Chinese Society for Agricultural Machinery, 2014, 45(1): 40-45. (in Chinese with English abstract)
[13] Guo Hanlin, Hong Yingjie, Zhang Xiang, et al. Method of identifying the vision navigation path for ratooning rice harvester[J]. Journal of Fujian Agriculture and Forestry University: Natural Science Edition, 2017, 46(3): 356-360. (in Chinese with English abstract)
[14] Zhao Teng, Noboru Noguchi, Yang Liangliang, et al. Fast edge detection method for wheat field based on visual recognition[J]. Transactions of the Chinese Society for Agricultural Machinery, 2016, 47(11): 32-37. (in Chinese with English abstract)
[15] Zhang Qin, Chen Shaojie, et al. A visual navigation algorithm for paddy field weeding robot based on image understanding[J]. Computers and Electronics in Agriculture, 2017, 143: 66-78.
[16] Xie Zhonghua. Path tracking design of rice transplanter based on SOPC embedded visual navigation[J]. Journal of Agricultural Mechanization Research, 2017, 39(10): 213-217. (in Chinese with English abstract)
[17] Pawin T, Tofael A, Tomohiro T. Navigation of autonomous tractor for orchards and plantations using a laser range finder: Automatic control of trailer position with tractor[J]. Biosystems Engineering, 2016, 147: 90-103.
[18] Li Mingxuan, Zhang Man, Meng Qingkuan, et al. Rapid detection of navigation baseline for farm machinery based on scan-filter algorithm[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2013, 29(1): 41-47. (in Chinese with English abstract)
[19] Vladimir C, Jarmo A, Vladimir B. Integer-based accurate conversion between RGB and HSV color spaces[J]. Computers and Electrical Engineering, 2003, 19(1): 328-337.
[20] Guy Z, Amir S. A novel data fusion algorithm for low-cost localisation and navigation of autonomous vineyard sprayer robots[J]. Biosystems Engineering, 2016, 146: 133-148.
[21] David B, Ben U, Gordon W, et al. Vision-based obstacle detection and navigation for an agricultural robot[J]. Journal of Field Robotics, 2016, 33(8): 1107-1130.
[22] Hamed R, Hassan Z D, Hassan M, et al. A new DSWTS algorithm for real-time pedestrian detection in autonomous agricultural tractors as a computer vision system[J]. Measurement, 2016, 93: 126-134.
[23] Liu Yang, Chen Bingqi. Detection for weak navigation line for wheat planter based on machine vision[J]. Applied Mechanics & Materials, 2012, 246-247: 235-240.
[24] Liang Xihuizi, Chen Bingqi, Jiang Qiuhui, et al. Detection method of navigation route of corn harvester based on image processing[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2016, 32(22): 43-49. (in Chinese with English abstract)
[25] Li Jingbin, Chen Bingqi, Liu Yang, et al. Detection for navigation route for cotton harvester based on machine vision[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2013, 29(11): 11-19. (in Chinese with English abstract)
[26] Zhang Xiongchu, Li Jingbin, Yao Qingwang, et al. Research on visual navigation path detection algorithms of tractor for cotton film-spreading and seeding operation[J]. Journal of Agricultural Mechanization Research, 2020, 42(5): 33-39. (in Chinese with English abstract)
[27] Li Ge, Wang Yu, Guo Liufen, et al. Improved pure pursuit algorithm for rice transplanter path tracking[J]. Transactions of the Chinese Society for Agricultural Machinery, 2018, 49(5): 21-26. (in Chinese with English abstract)
[28] He Jie, Meng Qingkuan, Zhang Man, et al. Crop baseline extraction method for off-road vehicle based on boundary detection and scan-filter[J]. Transactions of the Chinese Society for Agricultural Machinery, 2014, 45(S1): 265-270. (in Chinese with English abstract)
[29] Meng Qingkuan, Zhang Man, Yang Genghuang, et al. Guidance line recognition of agricultural machinery based on particle swarm optimization under natural illumination[J]. Transactions of the Chinese Society for Agricultural Machinery, 2016, 47(6): 11-20. (in Chinese with English abstract)
[30] Peng Shunzheng, Kan Za, Li Jingbin. Extraction of visual navigation directrix for harvesting operation in short-stalked and close-planting jujube orchard[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2017, 33(9): 45-52. (in Chinese with English abstract)
Zhang Xiongchu, Chen Bingqi, Li Jingbin, et al. Path detection of visual navigation for jujube harvesters[J]. Transactions of the Chinese Society of Agricultural Engineering (Transactions of the CSAE), 2020, 36(13): 133-140. (in Chinese with English abstract) doi: 10.11975/j.issn.1002-6819.2020.13.016 http://www.tcsae.org
Received: 2019-01-12    Revised: 2020-06-10
Foundation items: National Key Research and Development Program of China (2016YFD07011504); Young and Middle-aged Science and Technology Innovation Leading Talents Program of the Xinjiang Production and Construction Corps (2016BC001)
Biography: Zhang Xiongchu, Ph.D. candidate, research interests: image processing and machine vision. Email: 781661571@qq.com
※Corresponding author: Chen Bingqi, Ph.D., professor and doctoral supervisor, engaged in research on image processing and machine vision. Email: fbcbq@163.com
CLC number: TP242.6    Document code: A    Article ID: 1002-6819(2020)-13-0133-08