Class Prediction for High-Dimensional and Imbalanced Binary Data Using Feature Selection and Data Balancing
Date
2022
Abstract
In recent years, machine learning (ML) has become popular for data mining and prediction. Compared with traditional statistical methods, ML is known for its high accuracy in predicting or classifying data, but several limitations remain. First, if the class distribution of the data is highly imbalanced, ML algorithms suffer from the accuracy paradox: they tend to predict only the majority class. We address this problem with sampling methods. Second, handling high-dimensional data is computationally demanding; we address this with feature selection methods. After this preprocessing, we compare the performance of four ML algorithms: logistic regression, K-Nearest Neighbors (KNN), Random Forest (RF), and Extreme Gradient Boosting (XGBoost). We demonstrate the procedure on an acute kidney injury (AKI) medical dataset with 687 variables and 40,041 observations, where the main outcome is whether a patient has an AKI recurrence. The results show that XGBoost performs best in terms of the area under the receiver operating characteristic curve (AUC-ROC). For this dataset, sodium, furosemide, fentanyl, bumetanide, dopamine, insulin, albumin, glycerin, and epinephrine are the most influential medications, and CCS1581 is the most influential disease.
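The two preprocessing steps named in the abstract — balancing an imbalanced binary outcome by sampling and reducing dimensionality by feature selection — can be sketched as follows. This is a minimal illustration, not the thesis's actual pipeline: it assumes random oversampling of the minority class and a simple variance-threshold filter, whereas the thesis may use other sampling and selection methods. All names and the toy data are hypothetical.

```python
import numpy as np

def random_oversample(X, y, seed=0):
    """Balance a binary dataset by resampling the minority class
    with replacement until both classes have equal counts."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    minority = classes[np.argmin(counts)]
    n_needed = counts.max() - counts.min()
    idx_min = np.flatnonzero(y == minority)
    extra = rng.choice(idx_min, size=n_needed, replace=True)
    keep = np.concatenate([np.arange(len(y)), extra])
    return X[keep], y[keep]

def variance_filter(X, top_k):
    """Keep the top_k highest-variance columns -- a simple
    filter-style feature selection for high-dimensional data."""
    order = np.argsort(X.var(axis=0))[::-1][:top_k]
    return np.sort(order)  # column indices to keep, in original order

# Toy imbalanced, high-dimensional data: 100 samples, 50 features, ~10% positives
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 50))
y = (rng.random(100) < 0.1).astype(int)

cols = variance_filter(X, top_k=10)          # feature selection first
Xb, yb = random_oversample(X[:, cols], y)    # then class balancing
print(np.bincount(yb))                       # both classes now equal in size
```

After this preprocessing, the balanced, reduced dataset would be fed to the four classifiers and compared by AUC-ROC on a held-out test set (balancing should be applied only to the training split to avoid leaking resampled duplicates into the test data).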
Keywords
machine learning, logistic regression, K-Nearest Neighbor, Random Forest, Extreme Gradient Boosting, imbalanced, accuracy paradox, sampling, high dimensional, feature selection