JIANG Kun, DING Xueming
Abstract: In modern digitalized industrial production, the manufacturing, assembly, and testing processes generate huge amounts of data, and hidden in these data are information and knowledge that determine final product quality. With traditional sampling inspection, it is usually too late to make corrections by the time a quality problem is discovered. Data mining can instead use production parameters to predict product quality, providing quality information in advance so that the process can be adjusted to improve it. Following the CRISP-DM methodology, this paper applies ensemble learning algorithms (random forests and XGBoost) to build regression and classification models for data mining; accurate optimized models are obtained after parameter tuning, and applying them in production helps improve product quality.
Key Words: data mining; CRISP-DM; quality prediction; ensemble learning; random forests; XGBoost
DOI: 10.11907/rjdk.181535
CLC Number: TP319    Document Code: A    Article ID: 1672-7800(2019)001-0124-04
0 Introduction
The Made in China 2025 plan issued by the State Council explicitly calls for accelerating the development of intelligent equipment and products, making manufacturing processes intelligent, and focusing on building digital factories [1]. Data mining has been widely applied to product quality evaluation in digital factories [2]. Rostami et al. [3] applied data mining to quality description, quality classification, quality prediction, and parameter optimization, using support vector machines for quality evaluation. Chien et al. [4] used k-means clustering and decision-tree prediction to improve semiconductor production yield. Sim et al. [5] used Bayesian networks to analyze the causes of quality defects in defective products from PCB manufacturing. Liu et al. [6] used artificial neural networks for time-series analysis of electric energy consumption. Tsai et al. [7] used decision trees and random forests to help improve the yield of display materials. Jiang et al. [8] applied the XGBoost algorithm to quality prediction in manufacturing and compared it with other ensemble learning algorithms.
The Cross Industry Standard Process for Data Mining (CRISP-DM) [9] is the process model most commonly used for data mining in practice. CRISP-DM divides the data mining process into six phases: business understanding, data understanding, data preparation, modeling, evaluation, and deployment. The CRISP-DM process is shown in Fig. 1; the order of the phases is not fixed, and in practice it is common to loop back and forth between them.
1 Algorithm Description
1.1 Ensemble Learning
Ensemble learning accomplishes a learning task by constructing and combining multiple learners. If the individual learners are all of the same type, they are called base learners; commonly used base learning algorithms include decision trees and BP neural networks [10]. To combine the individual outputs into the ensemble output, classification problems usually use voting, while regression problems usually use weighted averaging [11].
Depending on how the individual learners are generated, ensemble methods fall into two broad categories (a code sketch follows this list):
(1) Individual learners generated in parallel, as in the Bagging algorithm [12], where each learner is trained independently on a bootstrap sample of the data, which reduces model variance.
(2) Individual learners generated sequentially, as in Boosting algorithms [13]. As shown in Fig. 3, the weight $w_n$ of the $n$-th learner $f_n$ is determined by the performance of the preceding learner $f_{n-1}$, so that training samples misclassified by earlier learners receive more attention from later ones, which reduces model bias [14].
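A minimal, hedged sketch of the two strategies (assuming scikit-learn ≥ 1.2, where the base learner is passed as estimator; the synthetic data and hyperparameter values are illustrative, not the paper's):

```python
# Sketch of parallel (Bagging) vs. sequential (Boosting) ensembles.
# Assumes scikit-learn >= 1.2; data and hyperparameters are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
base = DecisionTreeClassifier(max_depth=3, random_state=0)

# (1) Parallel: each tree is trained on an independent bootstrap sample;
#     class predictions are combined by voting.
bagging = BaggingClassifier(estimator=base, n_estimators=100, random_state=0)

# (2) Sequential: each tree up-weights samples misclassified by its
#     predecessors; trees are combined by weighted voting.
boosting = AdaBoostClassifier(estimator=base, n_estimators=100, random_state=0)

for name, model in (("bagging", bagging), ("boosting", boosting)):
    print(name, cross_val_score(model, X, y, cv=5).mean())
```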
1.2 Random Forest
Random forest [15] is an improved Bagging algorithm: on top of bootstrap sampling, it additionally selects attributes at random, which increases the diversity of the base learners and improves the generalization ability of the ensemble. When constructing each base decision tree, random forest randomly selects a subset of the feature set and then chooses the best feature within that subset for splitting. Suppose the data set has $d$ features in total and each base learner considers $k$ of them. If $k = d$, the base learner is identical to a conventional decision tree; in general $k = \log_2 d$ is recommended.
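As an illustrative sketch (scikit-learn assumed; X and y below are hypothetical stand-ins for production parameters and a measured quality score), the $k = \log_2 d$ rule corresponds to the max_features="log2" setting:

```python
# Minimal sketch: random forest regression with the k = log2(d) rule.
# Assumes scikit-learn; the data are hypothetical, not from the paper.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
X = rng.rand(500, 16)              # d = 16 hypothetical process features
y = 2.0 * X[:, 0] + rng.rand(500)  # hypothetical quality target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# max_features="log2" draws log2(d) candidate features at each split.
rf = RandomForestRegressor(n_estimators=200, max_features="log2", random_state=0)
rf.fit(X_train, y_train)
print("test MSE:", mean_squared_error(y_test, rf.predict(X_test)))
```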
1.3 XGBoost
XGBoost (eXtreme Gradient Boosting) [16] is an improved ensemble algorithm based on gradient boosting [17]. Gradient boosting fits each regression tree to the negative gradient of the loss function, $-\partial L(y_i, f(x_i))/\partial f(x_i)$, used as an approximation of the residual. XGBoost goes further by taking a second-order Taylor expansion of the loss function and adding a regularization term to the objective, which controls model complexity and prevents overfitting. In addition, like random forest, XGBoost subsamples the feature set, which increases sample diversity and further reduces overfitting. Although the base learners in XGBoost are still generated sequentially, computation inside each base learner (a decision tree) is parallelized, which greatly shortens training time.
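For reference, the regularized objective from [16]: at boosting step $t$, after the second-order Taylor expansion, XGBoost minimizes

$$\mathrm{Obj}^{(t)} \simeq \sum_{i=1}^{n}\Big[g_i f_t(x_i) + \tfrac{1}{2}\,h_i f_t^2(x_i)\Big] + \Omega(f_t), \qquad \Omega(f) = \gamma T + \tfrac{1}{2}\,\lambda \lVert w \rVert^2$$

where $g_i$ and $h_i$ are the first- and second-order derivatives of the loss with respect to the current prediction, $T$ is the number of leaves of the tree $f_t$, and $w$ its vector of leaf weights. A minimal usage sketch with the xgboost Python package (hyperparameter values are illustrative, not the tuned values from this paper):

```python
# Minimal sketch of XGBoost regression; assumes the xgboost package.
# Data and hyperparameter values are illustrative only.
import numpy as np
import xgboost as xgb

rng = np.random.RandomState(0)
X = rng.rand(500, 16)              # hypothetical process parameters
y = 2.0 * X[:, 0] + rng.rand(500)  # hypothetical quality target

model = xgb.XGBRegressor(
    n_estimators=300,
    learning_rate=0.1,
    max_depth=4,
    reg_lambda=1.0,        # the L2 term (lambda) in the objective above
    colsample_bytree=0.8,  # feature subsampling, as discussed above
    n_jobs=-1,             # within-tree parallelism
)
model.fit(X, y)
print(model.predict(X[:5]))
```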
[5] SIM H, CHOI D, KIM C O. A data mining approach to the causal analysis of product faults in multi-stage PCB manufacturing[J]. International Journal of Precision Engineering and Manufacturing, 2014, 15(8):1563-1573.
[6] LIU H, YAO Z, EKLUND T, et al. Electricity consumption time series profiling: a data mining application in the energy industry[C]. Advances in Data Mining: Applications and Theoretical Aspects. Springer Berlin Heidelberg, 2012.
[7] TSAI T L, LEE C Y. Data mining for yield improvement of photo spacer process in color filter manufacturing[J]. Procedia Manufacturing, 2017(11):1958-1965.
[8] JIANG J, LIU W. Application of the XGBoost algorithm in manufacturing quality prediction[J]. Intelligent Computer and Applications, 2017(6):58-60.
[9] SHEARER C. The CRISP-DM model: the new blueprint for data mining[J]. The Journal of Data Warehousing, 2000, 5(4):13-22.
[10] ZHOU Z H. Machine learning[M]. Beijing: Tsinghua University Press, 2016:171-173.
[11] ZHOU Z H. Ensemble learning[J]. Encyclopedia of Biometrics, 2015(2):411-416.
[12] BREIMAN L. Bagging predictors[J]. Machine Learning,1996,24(2):123-140.
[13] ZHOU Z H. Ensemble methods: foundations and algorithms[M]. Boca Raton: CRC Press, 2012.
[14] DIETTERICH T G. Ensemble methods in machine learning[C]. Multiple Classifier Systems. Springer Berlin Heidelberg, 2000.
[15] BREIMAN L. Random forests[J]. Machine Learning, 2001(45):5-32.
[16] CHEN T, GUESTRIN C. XGBoost: a scalable tree boosting system[C]. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, NY, USA: ACM, 2016: 785-794.
[17] FRIEDMAN J H. Greedy function approximation: a gradient boosting machine[J]. The Annals of Statistics, 2001, 29(5):1189-1232.
[18] NIELSEN D. Tree boosting with XGBoost: why does XGBoost win "every" machine learning competition?[D]. Trondheim: NTNU, 2016.
[19] CHAWLA N V, BOWYER K W, HALL L O, et al. SMOTE: synthetic minority over-sampling technique[J]. Journal of Artificial Intelligence Research, 2002(16):321-357.
[20] GUO H, LI Y, SHANG J, et al. Learning from class-imbalanced data: review of methods and applications[J]. Expert Systems with Applications, 2017(73):220-239.
[21] GÓMEZ-RÍOS A, LUENGO J, HERRERA F. A study on the noise label influence in boosting algorithms: AdaBoost, GBM and XGBoost[C]. Hybrid Artificial Intelligent Systems. Cham: Springer International Publishing, 2017: 268-280.
(Executive Editor: Du Nenggang)