A Recurrent Neural Network for Solving the Least Absolute Deviation Problem
LI Zhi-yong
(School of Science, Jimei University, Xiamen 361021, China)
Abstract: By using the saddle point theorem and the properties of the projection operator, a recurrent neural network is proposed for solving the least absolute deviation problem with linear constraints, and the network is proved to be globally convergent to an optimal solution. Numerical experiments show that the proposed method is a practical way to solve the least absolute deviation problem.
Keywords: recurrent neural network; least absolute deviation problem; linear constraints
This paper mainly considers the least absolute deviation problem with linear constraints:
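The display for problem (1) did not survive extraction. A generic constrained least absolute deviation formulation, consistent with the problem class treated in [8]-[9] (the exact constraint set of (1) is not recoverable from this excerpt and may differ), reads:

```latex
\min_{x \in \mathbb{R}^{n}} \ \lVert Dx - b \rVert_{1}
\qquad \text{s.t.} \qquad Ax = c, \quad x \ge 0,
```

where $D \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^{m}$ are the data of the estimation problem and the linear constraints define the feasible set onto which the network projects.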
In linear regression models, parameters are usually estimated by least squares. When a few outliers deviate strongly, however, the squared error magnifies those deviations far more than the absolute error does, so the least squares estimator is less robust than the least absolute deviation (LAD) estimator. The LAD estimator is therefore widely used in linear regression and in engineering, especially in signal and image processing [1-4]. Because the objective function of the LAD problem is nonsmooth, solving it is relatively involved, and studying algorithms for it is both necessary and meaningful. LAD problems sometimes must be solved in real time, which classical numerical algorithms such as descent methods [5] and linear programming [6] can hardly achieve. Since neural networks compute in parallel and can operate in real time, references [3-4] and [7-9] proposed neural-network approaches: they all use recurrent neural networks to solve certain LAD problems with linear constraints, each of which is a special case of problem (1), so problem (1) is general. References [3] and [7] cannot solve problem (1) directly, and references [4] and [8-9] can solve it only after a reformulation that enlarges the problem and reduces computational efficiency. Using a technique similar to that of [9], this paper applies the saddle point theorem and the properties of the projection operator to obtain a globally convergent recurrent neural network for solving problem (1).
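The network model (6) itself is not reproduced in this excerpt. The following is a minimal sketch, assuming a typical projection-type dynamics dx/dt = λ(P_Ω(x − αDᵀsgn(Dx − b)) − x) for min ‖Dx − b‖₁ subject to x ≥ 0, integrated by forward Euler; the matrix D, vector b, and all step sizes below are illustrative, not taken from the paper.

```python
import numpy as np

def lad_projection_network(D, b, lam=1.0, alpha=0.05, h=0.1, iters=5000):
    """Forward-Euler integration of a projection-type recurrent network
    for min ||D x - b||_1 subject to x >= 0 (a sketch, not model (6))."""
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        # subgradient of ||D x - b||_1
        g = D.T @ np.sign(D @ x - b)
        # projection P onto the feasible set {x >= 0}
        p = np.clip(x - alpha * g, 0.0, None)
        # Euler step of dx/dt = lam * (P(x - alpha*g) - x)
        x = x + h * lam * (p - x)
    return x

# Illustrative data: with D = I and b = [1, -2, 3], the solution of
# min ||x - b||_1 over x >= 0 is b clipped at zero, i.e. [1, 0, 3].
D = np.eye(3)
b = np.array([1.0, -2.0, 3.0])
x = lad_projection_network(D, b)
```

Note that the saddle-point based networks of [7]-[9] carry primal and dual state variables so that general linear constraints can be handled; this sketch only covers the nonnegativity cone, where the projection reduces to componentwise clipping.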
Example 1 is similar to Example 1 of [9]. Since the fourth column of D has the largest column sum, the optimal solution of this example is, in theory, [0 0 0 1 0 0 0 0 0]T. Taking λ = 10 and the initial point x(0) = [1 2 3 4 5 6 7 8 9]T, solving the network model (6) with MATLAB 7.0 yields the solution [0.0000 0 0 1.0000 0.0000 0 0 0.0000 0.0000]T. Fig. 1 shows the state trajectory of neural network (6) converging to the optimal solution. This confirms that the proposed method is a feasible way to solve problem (1).
Fig. 1 Convergence behavior of the state trajectory x(t) based on the neural network (6) in the example
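Independent of network (6), the solution of such an LAD instance can be cross-checked with a classical method. The sketch below uses iteratively reweighted least squares (IRLS), a standard smooth-reweighting scheme for the L1 objective; it is not the algorithm of this paper, and the data are illustrative.

```python
import numpy as np

def lad_irls(D, b, iters=100, eps=1e-8):
    """Iteratively reweighted least squares for min ||D x - b||_1
    (unconstrained; a standard cross-check, not the paper's method)."""
    x = np.linalg.lstsq(D, b, rcond=None)[0]   # start from the L2 fit
    for _ in range(iters):
        r = D @ x - b
        w = 1.0 / np.maximum(np.abs(r), eps)    # weights ~ 1 / |residual|
        # solve the weighted normal equations (D^T W D) x = D^T W b
        DW = D * w[:, None]
        x = np.linalg.solve(D.T @ DW, DW.T @ b)
    return x

# Fitting a constant to data: the L1 solution is the sample median (3).
D = np.ones((5, 1))
b = np.array([1.0, 2.0, 3.0, 7.0, 8.0])
x = lad_irls(D, b)
```

Such a cross-check is useful precisely because the LAD objective is nonsmooth: agreement between the network trajectory's limit and an independent solver gives practical evidence of convergence to a global optimum.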
[1] KUO S S, MAMMONE R J. Image restoration by convex projections using adaptive constraints and the L1 norm [J]. IEEE Trans Signal Processing, 1992, 40(1): 159-168.
[2] ZHAN Meiquan, DENG Zhiliang. Total variation regularized super-resolution image reconstruction based on the L1 norm [J]. Science Technology and Engineering, 2010, 10(28): 6903-6906.
[3] XIA Y S, KAMEL M S. Novel cooperative neural fusion algorithms for image restoration and image fusion [J]. IEEE Trans Image Process, 2007, 16(2): 367-381.
[4] XIA Y S, SUN C Y, ZHENG W X. Discrete-time neural network for fast solving large linear L1 estimation problems and its application to image restoration [J]. IEEE Transactions on Neural Networks and Learning Systems, 2012, 23(5): 812-820.
[5] BARTELS R H, CONN A R, SINCLAIR J W. Minimization techniques for piecewise differentiable functions: the L1 solution to an overdetermined linear system [J]. SIAM J Numer Anal, 1978, 15(2): 224-241.
[6] RUZINSKY S A, OLSEN E T. L1 and L∞ minimization via a variant of Karmarkar's algorithm [J]. IEEE Transactions on Acoustics, Speech and Signal Processing, 1989, 37(2): 245-253.
[7] XIA Y S, KAMEL M S. Cooperative recurrent neural networks for the constrained L1 norm estimator [J]. IEEE Trans Signal Process, 2007, 55(7): 3192-3205.
[8] XIA Y S, KAMEL M S. A cooperative recurrent neural network for solving L1 estimation problems with linear constraints [J]. Neural Comput, 2008, 20(3): 844-872.
[9] XIA Y S. A compact cooperative recurrent neural network for computing general constrained L1 norm estimators [J]. IEEE Trans Signal Process, 2009, 57(9): 3693-3697.
[10] KINDERLEHRER D, STAMPACCHIA G. An introduction to variational inequalities and their applications [M]. New York: Academic Press, 1980.
[11] BERTSEKAS D P, NEDIC A, OZDAGLAR A E. Convex analysis and optimization [M]. Beijing: Tsinghua University Press, 2006.
(Executive editor: Ma Jianhua; English proofreader: Huang Zhenkun)
CLC number: O 221; TP 181    Document code: A    Article ID: 1007-7405(2015)05-0392-04
Received: 2014-10-08; Revised: 2015-04-02
Biography: LI Zhi-yong (1981— ), male, lecturer, master's degree; research interest: optimization theory.