Feature selection for fault detection systems: application to the Tennessee Eastman process

Authors: Brigitte Chebel-Morello, Simon Malinowski, Hafida Senoussi

DOI: 10.1007/S10489-015-0694-6

Keywords:

Abstract: In fault detection systems, a massive amount of data gathered over the life cycle of equipment is often used to learn models or classifiers that aim at diagnosing different kinds of errors or failures. Among this huge quantity of information, some features (or sets of features) are more correlated with one kind of failure than with another. The presence of irrelevant features might affect the performance of the classifier. To improve the performance of the detection system, feature selection is hence a key step. We propose in this paper an algorithm named STRASS, which aims at detecting relevant features for classification purposes. In certain cases, when there exists a strong correlation between features and the associated class, conventional feature selection algorithms fail at selecting the most relevant features. In order to cope with this problem, STRASS uses a k-way correlation measure between features and the class to select and assess relevant features. To evaluate STRASS, we apply it to simulated data collected from the Tennessee Eastman chemical plant simulator. The Tennessee Eastman process (TEP) has been used in many studies, and three specific faults of this process are not well discriminated by conventional algorithms. The results obtained by STRASS are compared with those of reference feature selection algorithms; they show that the features selected by STRASS always lead to a classifier that performs better than one trained on the whole set of original features and better than those obtained with the other algorithms.
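To make the filter-style idea concrete, below is a minimal sketch of correlation-based feature selection followed by a classifier comparison. It is not the STRASS algorithm described in the paper (STRASS relies on a k-way correlation measure between feature subsets and the class, which is not reproduced here); the synthetic dataset, the choice of mutual information as the relevance criterion, the value k=10, and the decision-tree classifier are all illustrative assumptions.

```python
# Illustrative sketch only: filter feature selection based on a feature/class
# relevance score, then a comparison of classifier accuracy with and without
# selection. This is NOT the paper's STRASS algorithm.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic data standing in for Tennessee Eastman measurements:
# 52 features (as in the TEP), only a few of them informative.
X, y = make_classification(n_samples=2000, n_features=52, n_informative=6,
                           n_redundant=10, n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Baseline: classifier trained on the whole feature set.
full_clf = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
acc_full = accuracy_score(y_te, full_clf.predict(X_te))

# Filter selection: keep the 10 features most related to the class label,
# here scored by mutual information (a stand-in for the paper's criterion).
selector = SelectKBest(mutual_info_classif, k=10).fit(X_tr, y_tr)
X_tr_sel, X_te_sel = selector.transform(X_tr), selector.transform(X_te)

sel_clf = DecisionTreeClassifier(random_state=0).fit(X_tr_sel, y_tr)
acc_sel = accuracy_score(y_te, sel_clf.predict(X_te_sel))

print(f"accuracy with all 52 features:      {acc_full:.3f}")
print(f"accuracy with 10 selected features: {acc_sel:.3f}")
print("selected feature indices:", np.flatnonzero(selector.get_support()))
```

A simple pairwise criterion like this can miss features that are only jointly (not individually) correlated with the class, which is precisely the failure mode the paper's k-way measure is designed to address.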
