diff --git "a/desc_questions_train_final.csv" "b/desc_questions_train_final.csv" deleted file mode 100644--- "a/desc_questions_train_final.csv" +++ /dev/null @@ -1,366 +0,0 @@ -Chart;description;Questions -ObesityDataSet_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition FAF <= 2.0 and the second with the condition Height <= 1.72.;['It is clear that variable Age is one of the three most relevant features.', 'The variable TUE seems to be one of the two most relevant features.', 'The variable Age discriminates between the target values, as shown in the decision tree.', 'It is possible to state that CH2O is the first most discriminative variable regarding the class.', 'Variable TUE is one of the most relevant variables.', 'Variable Weight seems to be relevant for the majority of mining tasks.', 'Variables TUE and Age seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 6%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The accuracy for the presented tree is higher than 90%.', 'The number of False Positives is lower than the number of True Negatives for the presented tree.', 'The number of True Positives is lower than the number of False Positives for the presented tree.', 'The variable Age seems to be one of the five most relevant features.', 'Considering that A=True<=>[FAF <= 2.0] and B=True<=>[Height <= 1.72], it is possible to state that Naive Bayes algorithm classifies (not A, B), as Obesity_Type_II.', 'Considering that A=True<=>[FAF <= 2.0] and B=True<=>[Height <= 1.72], it is possible to state that KNN algorithm classifies (A,B) as Obesity_Type_II for any k ≤ 370.', 'Considering that A=True<=>[FAF <= 2.0] and B=True<=>[Height <= 1.72], it is possible to state that KNN algorithm classifies (not A, B) as Obesity_Type_I for any k ≤ 840.'] -ObesityDataSet_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 episodes.'] -ObesityDataSet_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.'] -ObesityDataSet_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 3, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.'] -ObesityDataSet_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 5.', 'KNN with 7 neighbour is in overfitting.', 'KNN with less than 17 neighbours is in overfitting.', 'We 
are able to identify the existence of overfitting for KNN models with less than 4 neighbors.'] -ObesityDataSet_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 4.', 'The decision tree is in overfitting for depths above 9.', 'We are able to identify the existence of overfitting for decision tree models with more than 4 nodes of depth.'] -ObesityDataSet_pca.png;A bar chart showing the explained variance ratio of 8 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 5 principal components would imply an error between 10 and 30%.'] -ObesityDataSet_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Age', 'Height', 'Weight', 'FCVC', 'NCP', 'CH2O', 'FAF', 'TUE'].;['The intrinsic dimensionality of this dataset is 3.', 'One of the variables CH2O or NCP can be discarded without losing information.', 'The variable FAF can be discarded without risking losing information.', 'Variables TUE and FAF are redundant, but we can’t say the same for the pair Height and FCVC.', 'Variables Weight and FAF are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Weight seems to be relevant for the majority of mining tasks.', 'Variables Age and Height seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable FAF might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable NCP previously than variable Weight.'] -ObesityDataSet_boxplots.png;A set of boxplots of the variables ['Age', 'Height', 'Weight', 'FCVC', 'NCP', 'CH2O', 'FAF', 'TUE'].;['Variable FCVC is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable TUE shows some outliers, but we can’t be sure of the same for variable NCP.', 'Outliers seem to be a problem in the dataset.', 'Variable FAF shows a high number of outlier values.', 'Variable Age doesn’t have any outliers.', 'Variable TUE presents some outliers.', 'At least 75 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -ObesityDataSet_histograms_symbolic.png;A set of bar charts of the variables ['CAEC', 'CALC', 'MTRANS', 'Gender', 'family_history_with_overweight', 'FAVC', 'SMOKE', 'SCC'].;['All variables, but the class, should be dealt with as symbolic.', 'The 
variable FAVC can be seen as ordinal.', 'The variable FAVC can be seen as ordinal without losing information.', 'Considering the common semantics for FAVC and CAEC variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for family_history_with_overweight variable, dummification would be the most adequate encoding.', 'The variable CALC can be coded as ordinal without losing information.', 'Feature generation based on variable MTRANS seems to be promising.', 'Feature generation based on the use of variable Gender wouldn’t be useful, but the use of CAEC seems to be promising.', 'Given the usual semantics of SMOKE variable, dummification would have been a better codification.', 'It is better to drop the variable CAEC than removing all records with missing values.', 'Not knowing the semantics of CALC variable, dummification could have been a more adequate codification.'] -ObesityDataSet_class_histogram.png;A bar chart showing the distribution of the target variable NObeyesdad.;['Balancing this dataset would be mandatory to improve the results.'] -ObesityDataSet_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -ObesityDataSet_histograms_numeric.png;A set of histograms of the variables ['Age', 'Height', 'Weight', 'FCVC', 'NCP', 'CH2O', 'FAF', 'TUE'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable Age can be seen as ordinal.', 'The variable Weight can be seen as ordinal without losing information.', 'Variable NCP is balanced.', 'It is clear that variable FAF shows some outliers, but we can’t be sure of the same for variable FCVC.', 'Outliers seem to be a problem in the dataset.', 'Variable TUE shows a high number of outlier values.', 'Variable FAF doesn’t have any outliers.', 'Variable Height presents some outliers.', 'At least 75 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for TUE and Age variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Height variable, dummification would be the most adequate encoding.', 'The variable NCP can be coded as ordinal without losing information.', 'Feature generation based on variable Height seems to be promising.', 'Feature generation based on the use of variable FAF wouldn’t be useful, but the use of Age seems to be promising.', 'Given the usual semantics of FAF variable, dummification would have been a better codification.', 'It is better to drop the variable TUE than removing all records with missing values.', 'Not knowing the semantics of Weight variable, dummification could have been a more adequate codification.'] -customer_segmentation_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Family_Size <= 2.5 and the second with the condition Work_Experience <= 9.5.;['It is clear that variable Work_Experience is one of the four most relevant features.', 'The variable 
Work_Experience seems to be one of the three most relevant features.', 'The variable Work_Experience discriminates between the target values, as shown in the decision tree.', 'It is possible to state that Work_Experience is the second most discriminative variable regarding the class.', 'Variable Work_Experience is one of the most relevant variables.', 'Variable Work_Experience seems to be relevant for the majority of mining tasks.', 'Variables Work_Experience and Family_Size seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 5%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The recall for the presented tree is lower than 60%.', 'The number of False Positives reported in the same tree is 30.', 'The number of True Positives is lower than the number of False Positives for the presented tree.', 'The number of True Negatives reported in the same tree is 10.', 'Considering that A=True<=>[Family_Size <= 2.5] and B=True<=>[Work_Experience <= 9.5], it is possible to state that KNN algorithm classifies (A,B) as D for any k ≤ 11.', 'Considering that A=True<=>[Family_Size <= 2.5] and B=True<=>[Work_Experience <= 9.5], the Decision Tree presented classifies (not A, not B) as C.', 'Considering that A=True<=>[Family_Size <= 2.5] and B=True<=>[Work_Experience <= 9.5], it is possible to state that KNN algorithm classifies (A,B) as A for any k ≤ 249.'] -customer_segmentation_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.'] -customer_segmentation_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] -customer_segmentation_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 3, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] -customer_segmentation_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 17.', 'KNN with 7 neighbour is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 6 neighbors.'] -customer_segmentation_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 5 nodes of depth is in overfitting.', 'The chart reporting the 
recall for different trees shows that the model enters in overfitting for models with depth higher than 9.', 'The decision tree is in overfitting for depths above 8.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.'] -customer_segmentation_pca.png;A bar chart showing the explained variance ratio of 3 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 15 and 30%.'] -customer_segmentation_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Age', 'Work_Experience', 'Family_Size'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables Age or Family_Size can be discarded without losing information.', 'The variable Family_Size can be discarded without risking losing information.', 'Variables Age and Family_Size seem to be useful for classification tasks.', 'Variables Work_Experience and Family_Size are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Family_Size seems to be relevant for the majority of mining tasks.', 'Variables Family_Size and Age seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Family_Size might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Family_Size previously than variable Age.'] -customer_segmentation_boxplots.png;A set of boxplots of the variables ['Age', 'Work_Experience', 'Family_Size'].;['Variable Age is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Age shows some outliers, but we can’t be sure of the same for variable Work_Experience.', 'Outliers seem to be a problem in the dataset.', 'Variable Work_Experience shows a high number of outlier values.', 'Variable Work_Experience doesn’t have any outliers.', 'Variable Work_Experience presents some outliers.', 'At least 50 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -customer_segmentation_histograms_symbolic.png;A set of bar charts of the variables ['Profession', 'Spending_Score', 'Var_1', 'Gender', 'Ever_Married', 'Graduated'].;['All variables, but the class, should be dealt with as date.', 'The variable Spending_Score can be seen as ordinal.', 'The variable Profession can be seen as ordinal without losing information.', 'Considering the common semantics for Profession and Spending_Score variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Var_1 variable, dummification would be the most adequate 
encoding.', 'The variable Profession can be coded as ordinal without losing information.', 'Feature generation based on variable Var_1 seems to be promising.', 'Feature generation based on the use of variable Profession wouldn’t be useful, but the use of Spending_Score seems to be promising.', 'Given the usual semantics of Gender variable, dummification would have been a better codification.', 'It is better to drop the variable Graduated than removing all records with missing values.', 'Not knowing the semantics of Graduated variable, dummification could have been a more adequate codification.'] -customer_segmentation_mv.png;A bar chart showing the number of missing values per variable of the dataset. The variables that have missing values are: ['Ever_Married', 'Graduated', 'Profession', 'Work_Experience', 'Family_Size', 'Var_1'].;['Discarding variable Ever_Married would be better than discarding all the records with missing values for that variable.', 'Dropping all records with missing values would be better than to drop the variables with missing values.', 'Dropping all rows with missing values can lead to a dataset with less than 25% of the original data.', 'There is no reason to believe that discarding records showing missing values is safer than discarding the corresponding variables in this case.', 'Feature generation based on variable Graduated seems to be promising.', 'It is better to drop the variable Var_1 than removing all records with missing values.'] -customer_segmentation_class_histogram.png;A bar chart showing the distribution of the target variable Segmentation.;['Balancing this dataset would be mandatory to improve the results.'] -customer_segmentation_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -customer_segmentation_histograms_numeric.png;A set of histograms of the variables ['Age', 'Work_Experience', 'Family_Size'].;['All variables, but the class, should be dealt with as date.', 'The variable Family_Size can be seen as ordinal.', 'The variable Age can be seen as ordinal without losing information.', 'Variable Work_Experience is balanced.', 'It is clear that variable Family_Size shows some outliers, but we can’t be sure of the same for variable Work_Experience.', 'Outliers seem to be a problem in the dataset.', 'Variable Work_Experience shows some outlier values.', 'Variable Work_Experience doesn’t have any outliers.', 'Variable Age presents some outliers.', 'At least 50 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Family_Size and Age variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Work_Experience variable, dummification would be the most adequate encoding.', 'The variable Work_Experience can be coded as ordinal without losing information.', 'Feature generation based on variable Family_Size seems to be promising.', 'Feature generation based on the use of variable Family_Size wouldn’t be useful, but the use of Age seems to be promising.', 
'Given the usual semantics of Family_Size variable, dummification would have been a better codification.', 'It is better to drop the variable Work_Experience than removing all records with missing values.', 'Not knowing the semantics of Work_Experience variable, dummification could have been a more adequate codification.'] -urinalysis_tests_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Age <= 0.1 and the second with the condition pH <= 5.5.;['It is clear that variable pH is one of the three most relevant features.', 'The variable Age seems to be one of the four most relevant features.', 'The variable Age discriminates between the target values, as shown in the decision tree.', 'It is possible to state that pH is the first most discriminative variable regarding the class.', 'Variable Specific Gravity is one of the most relevant variables.', 'Variable Age seems to be relevant for the majority of mining tasks.', 'Variables Specific Gravity and pH seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The accuracy for the presented tree is lower than 75%.', 'The number of False Positives reported in the same tree is 10.', 'The number of True Positives is higher than the number of False Positives for the presented tree.', 'The number of True Negatives is higher than the number of True Positives for the presented tree.', 'Considering that A=True<=>[Age <= 0.1] and B=True<=>[pH <= 5.5], it is possible to state that KNN algorithm classifies (A,B) as POSITIVE for any k ≤ 215.', 'Considering that A=True<=>[Age <= 0.1] and B=True<=>[pH <= 5.5], the Decision Tree presented classifies (not A, B) as POSITIVE.', 'Considering that A=True<=>[Age <= 0.1] and B=True<=>[pH <= 5.5], it is possible to state that Naive Bayes algorithm classifies (not A, B), as NEGATIVE.'] -urinalysis_tests_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] -urinalysis_tests_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] -urinalysis_tests_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 2, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] -urinalysis_tests_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 5.', 'KNN with 7 neighbour is in 
overfitting.', 'KNN with less than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 3 neighbors.'] -urinalysis_tests_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 20 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 10.', 'The decision tree is in overfitting for depths above 9.', 'We are able to identify the existence of overfitting for decision tree models with more than 4 nodes of depth.'] -urinalysis_tests_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -urinalysis_tests_pca.png;A bar chart showing the explained variance ratio of 3 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 15 and 25%.'] -urinalysis_tests_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Age', 'pH', 'Specific Gravity'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables pH or Specific Gravity can be discarded without losing information.', 'The variable Specific Gravity can be discarded without risking losing information.', 'Variables pH and Age seem to be useful for classification tasks.', 'Variables pH and Specific Gravity are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Specific Gravity seems to be relevant for the majority of mining tasks.', 'Variables Specific Gravity and pH seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable pH might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Specific Gravity previously than variable Age.'] -urinalysis_tests_boxplots.png;A set of boxplots of the variables ['Age', 'pH', 'Specific Gravity'].;['Variable pH is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Age shows some outliers, but we can’t be sure of the same for variable Specific Gravity.', 'Outliers seem to be a problem in the dataset.', 'Variable pH shows a high number of outlier values.', 'Variable Specific Gravity doesn’t have any outliers.', 'Variable Specific Gravity presents some outliers.', 'At least 85 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling 
transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -urinalysis_tests_histograms_symbolic.png;A set of bar charts of the variables ['Color', 'Transparency', 'Glucose', 'Protein', 'Epithelial Cells', 'Mucous Threads', 'Amorphous Urates', 'Bacteria', 'Gender'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable Transparency can be seen as ordinal.', 'The variable Mucous Threads can be seen as ordinal without losing information.', 'Considering the common semantics for Amorphous Urates and Color variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Glucose variable, dummification would be the most adequate encoding.', 'The variable Transparency can be coded as ordinal without losing information.', 'Feature generation based on variable Gender seems to be promising.', 'Feature generation based on the use of variable Protein wouldn’t be useful, but the use of Color seems to be promising.', 'Given the usual semantics of Bacteria variable, dummification would have been a better codification.', 'It is better to drop the variable Mucous Threads than removing all records with missing values.', 'Not knowing the semantics of Transparency variable, dummification could have been a more adequate codification.'] -urinalysis_tests_mv.png;A bar chart showing the number of missing values per variable of the dataset. The variables that have missing values are: ['Color'].;['Discarding variable Color would be better than discarding all the records with missing values for that variable.', 'Dropping all records with missing values would be better than to drop the variables with missing values.', 'Dropping all rows with missing values can lead to a dataset with less than 40% of the original data.', 'There is no reason to believe that discarding records showing missing values is safer than discarding the corresponding variables in this case.', 'Feature generation based on variable Color seems to be promising.', 'It is better to drop the variable Color than removing all records with missing values.'] -urinalysis_tests_class_histogram.png;A bar chart showing the distribution of the target variable Diagnosis.;['Balancing this dataset would be mandatory to improve the results.'] -urinalysis_tests_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -urinalysis_tests_histograms_numeric.png;A set of histograms of the variables ['Age', 'pH', 'Specific Gravity'].;['All variables, but the class, should be dealt with as date.', 'The variable Age can be seen as ordinal.', 'The variable Age can be seen as ordinal without losing information.', 'Variable Age is balanced.', 'It is clear that variable Specific Gravity shows some outliers, but we can’t be sure of the same for variable Age.', 'Outliers seem to be a problem in the dataset.', 'Variable Age shows a high number of outlier values.', 'Variable Specific Gravity doesn’t have any outliers.', 'Variable Age presents some outliers.', 'At least 85 of the variables present outliers.', 'The boxplots 
presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for pH and Age variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Specific Gravity variable, dummification would be the most adequate encoding.', 'The variable Age can be coded as ordinal without losing information.', 'Feature generation based on variable Specific Gravity seems to be promising.', 'Feature generation based on the use of variable Specific Gravity wouldn’t be useful, but the use of Age seems to be promising.', 'Given the usual semantics of pH variable, dummification would have been a better codification.', 'It is better to drop the variable Specific Gravity than removing all records with missing values.', 'Not knowing the semantics of Specific Gravity variable, dummification could have been a more adequate codification.'] -detect_dataset_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Ic <= 71.01 and the second with the condition Vb <= -0.37.;['It is clear that variable Vb is one of the four most relevant features.', 'The variable Vc seems to be one of the five most relevant features.', 'The variable Ib discriminates between the target values, as shown in the decision tree.', 'It is possible to state that Vc is the second most discriminative variable regarding the class.', 'Variable Ic is one of the most relevant variables.', 'Variable Ic seems to be relevant for the majority of mining tasks.', 'Variables Vc and Ib seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 6%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The precision for the presented tree is higher than 60%.', 'The number of True Positives is lower than the number of False Negatives for the presented tree.', 'The number of True Positives is lower than the number of False Negatives for the presented tree.', 'The variable Va seems to be one of the four most relevant features.', 'Considering that A=True<=>[Ic <= 71.01] and B=True<=>[Vb <= -0.37], it is possible to state that KNN algorithm classifies (A,B) as 1 for any k ≤ 797.', 'Considering that A=True<=>[Ic <= 71.01] and B=True<=>[Vb <= -0.37], it is possible to state that KNN algorithm classifies (A,B) as 0 for any k ≤ 1206.', 'Considering that A=True<=>[Ic <= 71.01] and B=True<=>[Vb <= -0.37], it is possible to state that KNN algorithm classifies (not A, B) as 1 for any k ≤ 3.'] -detect_dataset_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.'] -detect_dataset_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] -detect_dataset_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis 
represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] -detect_dataset_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 17.', 'KNN with 5 neighbour is in overfitting.', 'KNN with more than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 2 neighbors.'] -detect_dataset_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 9 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 7.', 'The decision tree is in overfitting for depths above 6.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.'] -detect_dataset_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -detect_dataset_pca.png;A bar chart showing the explained variance ratio of 6 principal components.;['The first 5 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 15 and 30%.'] -detect_dataset_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['Ia', 'Ib', 'Ic', 'Va', 'Vb', 'Vc'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables Ic or Ia can be discarded without losing information.', 'The variable Ia can be discarded without risking losing information.', 'Variables Vb and Ia are redundant, but we can’t say the same for the pair Va and Ic.', 'Variables Ia and Ib are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Ic seems to be relevant for the majority of mining tasks.', 'Variables Vc and Ic seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Va might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Vc previously than variable Ic.'] -detect_dataset_boxplots.png;A set of boxplots of the variables ['Ia', 'Ib', 'Ic', 'Va', 'Vb', 'Vc'].;['Variable Vb is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Ib shows some outliers, but we can’t be sure of the same for variable Vc.', 'Outliers seem to be a problem in the dataset.', 'Variable Vb shows a high number of outlier values.', 'Variable Ia doesn’t have any outliers.', 'Variable Ia presents some outliers.', 'At least 50 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -detect_dataset_class_histogram.png;A bar chart showing the distribution of the target variable Output.;['Balancing this dataset would be mandatory to improve the results.'] -detect_dataset_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -detect_dataset_histograms_numeric.png;A set of histograms of the variables ['Ia', 'Ib', 'Ic', 'Va', 'Vb', 'Vc'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable Ic can be seen as ordinal.', 'The variable Ib can be seen as ordinal without losing information.', 'Variable Va is balanced.', 'It is clear that variable Vb shows some outliers, but we can’t be sure of the same for variable Vc.', 'Outliers seem to be a problem in the dataset.', 'Variable Ib shows some outlier values.', 'Variable Ic doesn’t have any outliers.', 'Variable Vc presents some outliers.', 'At least 75 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Vc and Ia 
variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Ic variable, dummification would be the most adequate encoding.', 'The variable Ia can be coded as ordinal without losing information.', 'Feature generation based on variable Vb seems to be promising.', 'Feature generation based on the use of variable Vb wouldn’t be useful, but the use of Ia seems to be promising.', 'Given the usual semantics of Ic variable, dummification would have been a better codification.', 'It is better to drop the variable Ic than removing all records with missing values.', 'Not knowing the semantics of Va variable, dummification could have been a more adequate codification.'] -diabetes_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition BMI <= 29.85 and the second with the condition Age <= 27.5.;['It is clear that variable Glucose is one of the five most relevant features.', 'The variable Glucose seems to be one of the three most relevant features.', 'The variable Insulin discriminates between the target values, as shown in the decision tree.', 'It is possible to state that Age is the second most discriminative variable regarding the class.', 'Variable DiabetesPedigreeFunction is one of the most relevant variables.', 'Variable Age seems to be relevant for the majority of mining tasks.', 'Variables Age and DiabetesPedigreeFunction seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The specificity for the presented tree is higher than 75%.', 'The number of False Negatives is lower than the number of True Negatives for the presented tree.', 'The number of False Negatives is lower than the number of False Positives for the presented tree.', 'The variable Insulin seems to be one of the three most relevant features.', 'Considering that A=True<=>[BMI <= 29.85] and B=True<=>[Age <= 27.5], it is possible to state that Naive Bayes algorithm classifies (not A, B), as 1.', 'Considering that A=True<=>[BMI <= 29.85] and B=True<=>[Age <= 27.5], it is possible to state that KNN algorithm classifies (A,B) as 0 for any k ≤ 161.', 'Considering that A=True<=>[BMI <= 29.85] and B=True<=>[Age <= 27.5], it is possible to state that KNN algorithm classifies (A,B) as 1 for any k ≤ 167.'] -diabetes_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] -diabetes_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] -diabetes_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 2, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of 
diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.'] -diabetes_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 13.', 'KNN with 5 neighbour is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 4 neighbors.'] -diabetes_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 12 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 3.', 'The decision tree is in overfitting for depths above 10.', 'We are able to identify the existence of overfitting for decision tree models with more than 6 nodes of depth.'] -diabetes_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -diabetes_pca.png;A bar chart showing the explained variance ratio of 8 principal components.;['The first 4 principal components are enough for explaining half the data variance.', 'Using the first 6 principal components would imply an error between 15 and 20%.'] -diabetes_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['Pregnancies', 'Glucose', 'BloodPressure', 'SkinThickness', 'Insulin', 'BMI', 'DiabetesPedigreeFunction', 'Age'].;['The intrinsic dimensionality of this dataset is 7.', 'One of the variables Age or Insulin can be discarded without losing information.', 'The variable Glucose can be discarded without risking losing information.', 'Variables Pregnancies and BMI are redundant, but we can’t say the same for the pair SkinThickness and Glucose.', 'Variables BloodPressure and DiabetesPedigreeFunction are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Glucose seems to be relevant for the majority of mining tasks.', 'Variables Age and DiabetesPedigreeFunction seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Insulin might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Pregnancies previously than variable Insulin.'] -diabetes_boxplots.png;A set of boxplots of the variables ['Pregnancies', 'Glucose', 'BloodPressure', 'SkinThickness', 'Insulin', 'BMI', 'DiabetesPedigreeFunction', 'Age'].;['Variable Age is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable DiabetesPedigreeFunction shows some outliers, but we can’t be sure of the same for variable BloodPressure.', 'Outliers seem to be a problem in the dataset.', 'Variable Pregnancies shows some outlier values.', 'Variable Insulin doesn’t have any outliers.', 'Variable BloodPressure presents some outliers.', 'At least 60 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -diabetes_class_histogram.png;A bar chart showing the distribution of the target variable Outcome.;['Balancing this dataset would be mandatory to improve the results.'] -diabetes_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are symbolic, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -diabetes_histograms_numeric.png;A set of histograms of the variables ['Pregnancies', 'Glucose', 'BloodPressure', 'SkinThickness', 'Insulin', 'BMI', 'DiabetesPedigreeFunction', 'Age'].;['All variables, but the class, should be dealt with as numeric.', 'The variable DiabetesPedigreeFunction can be seen as ordinal.', 'The variable BloodPressure can be seen as ordinal without losing information.', 'Variable Insulin is balanced.', 'It is clear that variable SkinThickness shows some outliers, but we can’t be sure of the same for variable BMI.', 'Outliers seem to be a 
problem in the dataset.', 'Variable DiabetesPedigreeFunction shows some outlier values.', 'Variable Insulin doesn’t have any outliers.', 'Variable DiabetesPedigreeFunction presents some outliers.', 'At least 50 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Age and Pregnancies variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Age variable, dummification would be the most adequate encoding.', 'The variable BloodPressure can be coded as ordinal without losing information.', 'Feature generation based on variable SkinThickness seems to be promising.', 'Feature generation based on the use of variable Glucose wouldn’t be useful, but the use of Pregnancies seems to be promising.', 'Given the usual semantics of DiabetesPedigreeFunction variable, dummification would have been a better codification.', 'It is better to drop the variable Insulin than removing all records with missing values.', 'Not knowing the semantics of DiabetesPedigreeFunction variable, dummification could have been a more adequate codification.'] -Placement_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition ssc_p <= 60.09 and the second with the condition hsc_p <= 70.24.;['It is clear that variable mba_p is one of the three most relevant features.', 'The variable mba_p seems to be one of the three most relevant features.', 'The variable degree_p discriminates between the target values, as shown in the decision tree.', 'It is possible to state that mba_p is the second most discriminative variable regarding the class.', 'Variable mba_p is one of the most relevant variables.', 'Variable ssc_p seems to be relevant for the majority of mining tasks.', 'Variables degree_p and etest_p seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 6%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The recall for the presented tree is lower than 60%.', 'The number of False Negatives is lower than the number of True Negatives for the presented tree.', 'The number of True Negatives is lower than the number of False Negatives for the presented tree.', 'The variable etest_p seems to be one of the three most relevant features.', 'Considering that A=True<=>[ssc_p <= 60.09] and B=True<=>[hsc_p <= 70.24], it is possible to state that KNN algorithm classifies (A,B) as Not Placed for any k ≤ 16.', 'Considering that A=True<=>[ssc_p <= 60.09] and B=True<=>[hsc_p <= 70.24], the Decision Tree presented classifies (not A, B) as Not Placed.', 'Considering that A=True<=>[ssc_p <= 60.09] and B=True<=>[hsc_p <= 70.24], it is possible to state that KNN algorithm classifies (not A, B) as Placed for any k ≤ 68.'] -Placement_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] -Placement_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis 
represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.'] -Placement_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 2, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.'] -Placement_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 5.', 'KNN with 5 neighbour is in overfitting.', 'KNN with less than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 5 neighbors.'] -Placement_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 4.', 'The decision tree is in overfitting for depths above 6.', 'We are able to identify the existence of overfitting for decision tree models with more than 4 nodes of depth.'] -Placement_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -Placement_pca.png;A bar chart showing the explained variance ratio of 5 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 15 and 25%.'] -Placement_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['ssc_p', 'hsc_p', 'degree_p', 'etest_p', 'mba_p'].;['The intrinsic dimensionality of this dataset is 4.', 'One of the variables ssc_p or hsc_p can be discarded without losing information.', 'The variable ssc_p can be discarded without risking losing information.', 'Variables etest_p and ssc_p are redundant, but we can’t say the same for the pair mba_p and degree_p.', 'Variables hsc_p and degree_p are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable hsc_p seems to be relevant for the majority of mining tasks.', 'Variables mba_p and etest_p seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable degree_p might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable etest_p previously than variable ssc_p.'] -Placement_boxplots.png;A set of boxplots of the variables ['ssc_p', 'hsc_p', 'degree_p', 'etest_p', 'mba_p'].;['Variable etest_p is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable etest_p shows some outliers, but we can’t be sure of the same for variable ssc_p.', 'Outliers seem to be a problem in the dataset.', 'Variable hsc_p shows a high number of outlier values.', 'Variable ssc_p doesn’t have any outliers.', 'Variable ssc_p presents some outliers.', 'At least 75 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -Placement_histograms_symbolic.png;A set of bar charts of the variables ['hsc_s', 'degree_t', 'gender', 'ssc_b', 'hsc_b', 'workex', 'specialisation'].;['All variables, but the class, should be dealt with as numeric.', 'The variable ssc_b can be seen as ordinal.', 'The variable workex can be seen as ordinal without losing information.', 'Considering the common semantics for workex and hsc_s variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for workex variable, dummification would be the most adequate encoding.', 'The variable hsc_s can be coded as ordinal without losing information.', 'Feature generation based on variable hsc_s seems to be promising.', 'Feature generation based on the use of variable gender wouldn’t be useful, but the use of hsc_s seems to be promising.', 'Given the usual semantics of hsc_s variable, dummification would have been a better codification.', 'It is better to drop the variable specialisation than removing all records with missing values.', 'Not knowing the semantics of workex variable, dummification could have been a more adequate codification.'] -Placement_class_histogram.png;A bar chart showing the distribution of the target variable status.;['Balancing this dataset would be mandatory to improve the results.'] 
-Placement_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are binary, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -Placement_histograms_numeric.png;A set of histograms of the variables ['ssc_p', 'hsc_p', 'degree_p', 'etest_p', 'mba_p'].;['All variables, but the class, should be dealt with as binary.', 'The variable mba_p can be seen as ordinal.', 'The variable hsc_p can be seen as ordinal without losing information.', 'Variable etest_p is balanced.', 'It is clear that variable ssc_p shows some outliers, but we can’t be sure of the same for variable etest_p.', 'Outliers seem to be a problem in the dataset.', 'Variable mba_p shows a high number of outlier values.', 'Variable etest_p doesn’t have any outliers.', 'Variable mba_p presents some outliers.', 'At least 85% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for ssc_p and hsc_p variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for mba_p variable, dummification would be the most adequate encoding.', 'The variable degree_p can be coded as ordinal without losing information.', 'Feature generation based on variable etest_p seems to be promising.', 'Feature generation based on the use of variable mba_p wouldn’t be useful, but the use of ssc_p seems to be promising.', 'Given the usual semantics of degree_p variable, dummification would have been a better codification.', 'It is better to drop the variable hsc_p than to remove all records with missing values.', 'Not knowing the semantics of mba_p variable, dummification could have been a more adequate codification.'] -Liver_Patient_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Alkphos <= 211.5 and the second with the condition Sgot <= 26.5.;['It is clear that variable ALB is one of the four most relevant features.', 'The variable AG_Ratio seems to be one of the five most relevant features.', 'The variable TB discriminates between the target values, as shown in the decision tree.', 'It is possible to state that TP is the second most discriminative variable regarding the class.', 'Variable TP is one of the most relevant variables.', 'Variable Age seems to be relevant for the majority of mining tasks.', 'Variables ALB and Age seem to be useful for classification tasks.', 'A smaller tree would be delivered if we were to apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positives is smaller than the number of False Negatives.', 'The precision for the presented tree is higher than 60%.', 'The number of False Positives is higher than the number of True Negatives for the presented tree.', 'The number of False Positives is higher than the number of False Negatives for the presented tree.', 'The recall for the presented tree is lower than 90%.', 'Considering that A=True<=>[Alkphos <= 211.5] and B=True<=>[Sgot <= 26.5], it is possible to state that the KNN algorithm classifies (A, B) as 1 for any k ≤ 77.', 'Considering that 
A=True<=>[Alkphos <= 211.5] and B=True<=>[Sgot <= 26.5], it is possible to state that the Naive Bayes algorithm classifies (not A, B) as 1.', 'Considering that A=True<=>[Alkphos <= 211.5] and B=True<=>[Sgot <= 26.5], the Decision Tree presented classifies (not A, not B) as 2.'] -Liver_Patient_overfitting_mlp.png;A multi-line chart showing the overfitting of an MLP where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] -Liver_Patient_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] -Liver_Patient_overfitting_rf.png;A multi-line chart showing the overfitting of a random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for the Random Forest identified as 3 may be explained by its estimators being in underfitting.', 'The random forest results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] -Liver_Patient_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 17.', 'KNN with 5 neighbours is in overfitting.', 'KNN with less than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 4 neighbors.'] -Liver_Patient_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with a depth of 5 is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters overfitting for depths higher than 10.', 'The decision tree is in overfitting for depths above 6.', 'We are able to identify the existence of overfitting for decision tree models with a depth greater than 3.'] -Liver_Patient_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -Liver_Patient_pca.png;A bar chart showing the explained variance ratio of 9 principal components.;['The first 5 principal components are enough for explaining half the data variance.', 'Using the first 8 principal components would imply an error between 15 and 30%.'] -Liver_Patient_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['Age', 'TB', 'DB', 'Alkphos', 'Sgpt', 'Sgot', 'TP', 'ALB', 'AG_Ratio'].;['The intrinsic dimensionality of this dataset is 4.', 'One of the variables TP or Sgpt can be discarded without losing information.', 'The variable DB can be discarded without risking losing information.', 'Variables AG_Ratio and TP are redundant, but we can’t say the same for the pair Sgot and Alkphos.', 'Variables Sgot and TB are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable TB seems to be relevant for the majority of mining tasks.', 'Variables Age and Sgpt seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable DB might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection selecting variable TB before variable AG_Ratio.'] -Liver_Patient_boxplots.png;A set of boxplots of the variables ['Age', 'TB', 'DB', 'Alkphos', 'Sgpt', 'Sgot', 'TP', 'ALB', 'AG_Ratio'].;['Variable TP is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable TB shows some outliers, but we can’t be sure of the same for variable ALB.', 'Outliers seem to be a problem in the dataset.', 'Variable TB shows a high number of outlier values.', 'Variable AG_Ratio doesn’t have any outliers.', 'Variable Sgot presents some outliers.', 'At least 50% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -Liver_Patient_histograms_symbolic.png;A set of bar charts of the variables ['Gender'].;['All variables, but the class, should be dealt with as binary.', 'The variable Gender can be seen as ordinal.', 'The variable Gender can be seen as ordinal without losing information.', 'Considering the common semantics for Gender and variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Gender variable, dummification would be the most adequate encoding.', 'The variable Gender can be coded as ordinal without losing information.', 'Feature generation based on variable Gender seems to be promising.', 'Feature generation based on the use of variable Gender wouldn’t be useful, but the use of seems to be promising.', 'Given the usual semantics of Gender variable, dummification would have been a better codification.', 'It is better to drop the variable Gender than to remove all records with missing values.', 'Not knowing the semantics of Gender variable, dummification could have been a more adequate codification.'] -Liver_Patient_mv.png;A bar chart showing the number of missing values per variable of the dataset. 
The variables that have missing values are: ['AG_Ratio'].;['Discarding variable AG_Ratio would be better than discarding all the records with missing values for that variable.', 'Dropping all records with missing values would be better than dropping the variables with missing values.', 'Dropping all rows with missing values can lead to a dataset with less than 30% of the original data.', 'There is no reason to believe that discarding records showing missing values is safer than discarding the corresponding variables in this case.', 'Feature generation based on variable AG_Ratio seems to be promising.', 'It is better to drop the variable AG_Ratio than to remove all records with missing values.'] -Liver_Patient_class_histogram.png;A bar chart showing the distribution of the target variable Selector.;['Balancing this dataset would be mandatory to improve the results.'] -Liver_Patient_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are dates, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -Liver_Patient_histograms_numeric.png;A set of histograms of the variables ['Age', 'TB', 'DB', 'Alkphos', 'Sgpt', 'Sgot', 'TP', 'ALB', 'AG_Ratio'].;['All variables, but the class, should be dealt with as binary.', 'The variable Sgpt can be seen as ordinal.', 'The variable Alkphos can be seen as ordinal without losing information.', 'Variable Sgpt is balanced.', 'It is clear that variable ALB shows some outliers, but we can’t be sure of the same for variable DB.', 'Outliers seem to be a problem in the dataset.', 'Variable AG_Ratio shows some outlier values.', 'Variable AG_Ratio doesn’t have any outliers.', 'Variable TB presents some outliers.', 'At least 75% of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for AG_Ratio and Age variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Sgpt variable, dummification would be the most adequate encoding.', 'The variable TB can be coded as ordinal without losing information.', 'Feature generation based on variable Age seems to be promising.', 'Feature generation based on the use of variable Alkphos wouldn’t be useful, but the use of Age seems to be promising.', 'Given the usual semantics of Alkphos variable, dummification would have been a better codification.', 'It is better to drop the variable AG_Ratio than to remove all records with missing values.', 'Not knowing the semantics of ALB variable, dummification could have been a more adequate codification.'] -Hotel_Reservations_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition lead_time <= 151.5 and the second with the condition no_of_special_requests <= 2.5.;['It is clear that variable no_of_special_requests is one of the five most relevant features.', 'The variable no_of_weekend_nights seems to be one of the two most relevant features.', 'The variable no_of_weekend_nights discriminates between the target values, as shown in the decision tree.', 'It is possible to state that no_of_children is the most 
discriminative variable regarding the class.', 'Variable no_of_children is one of the most relevant variables.', 'Variable avg_price_per_room seems to be relevant for the majority of mining tasks.', 'Variables no_of_weekend_nights and no_of_adults seem to be useful for classification tasks.', 'A smaller tree would be delivered if we were to apply post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positives is bigger than the number of False Negatives.', 'The accuracy for the presented tree is lower than 75%.', 'The number of False Positives is higher than the number of True Positives for the presented tree.', 'The number of True Positives is lower than the number of True Negatives for the presented tree.', 'The specificity for the presented tree is lower than its accuracy.', 'Considering that A=True<=>[lead_time <= 151.5] and B=True<=>[no_of_special_requests <= 2.5], the Decision Tree presented classifies (not A, not B) as Canceled.', 'Considering that A=True<=>[lead_time <= 151.5] and B=True<=>[no_of_special_requests <= 2.5], the Decision Tree presented classifies (A, not B) as Canceled.', 'Considering that A=True<=>[lead_time <= 151.5] and B=True<=>[no_of_special_requests <= 2.5], it is possible to state that the KNN algorithm classifies (A, B) as Canceled for any k ≤ 9756.'] -Hotel_Reservations_overfitting_mlp.png;A multi-line chart showing the overfitting of an MLP where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.'] -Hotel_Reservations_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.'] -Hotel_Reservations_overfitting_rf.png;A multi-line chart showing the overfitting of a random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for the Random Forest identified as 10 may be explained by its estimators being in underfitting.', 'The random forest results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] -Hotel_Reservations_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 5.', 'KNN with 5 neighbours is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 6 neighbors.'] -Hotel_Reservations_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with a depth of 9 is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters overfitting for depths higher than 4.', 'The decision tree is in overfitting for depths above 6.', 'We are able to 
identify the existence of overfitting for decision tree models with a depth greater than 3.'] -Hotel_Reservations_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -Hotel_Reservations_pca.png;A bar chart showing the explained variance ratio of 9 principal components.;['The first 4 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 5 and 30%.'] -Hotel_Reservations_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['no_of_adults', 'no_of_children', 'no_of_weekend_nights', 'no_of_week_nights', 'lead_time', 'arrival_month', 'arrival_date', 'avg_price_per_room', 'no_of_special_requests'].;['The intrinsic dimensionality of this dataset is 7.', 'One of the variables no_of_children or arrival_date can be discarded without losing information.', 'The variable avg_price_per_room can be discarded without risking losing information.', 'Variables no_of_adults and no_of_special_requests are redundant, but we can’t say the same for the pair no_of_children and lead_time.', 'Variables no_of_week_nights and no_of_weekend_nights are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable no_of_week_nights seems to be relevant for the majority of mining tasks.', 'Variables no_of_special_requests and no_of_week_nights seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable no_of_special_requests might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection selecting variable no_of_special_requests before variable no_of_children.'] -Hotel_Reservations_boxplots.png;A set of boxplots of the variables ['no_of_adults', 'no_of_children', 'no_of_weekend_nights', 'no_of_week_nights', 'lead_time', 'arrival_month', 'arrival_date', 'avg_price_per_room', 'no_of_special_requests'].;['Variable no_of_adults is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable no_of_weekend_nights shows some outliers, but we can’t be sure of the same for variable no_of_children.', 'Outliers seem to be a problem in the dataset.', 'Variable no_of_special_requests shows a high number of outlier values.', 'Variable no_of_week_nights doesn’t have any outliers.', 'Variable arrival_date presents some outliers.', 'At least 85% of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] 
-Hotel_Reservations_histograms_symbolic.png;A set of bar charts of the variables ['type_of_meal_plan', 'room_type_reserved', 'required_car_parking_space', 'arrival_year', 'repeated_guest'].;['All variables, but the class, should be dealt with as binary.', 'The variable required_car_parking_space can be seen as ordinal.', 'The variable repeated_guest can be seen as ordinal without losing information.', 'Considering the common semantics for repeated_guest and type_of_meal_plan variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for room_type_reserved variable, dummification would be the most adequate encoding.', 'The variable type_of_meal_plan can be coded as ordinal without losing information.', 'Feature generation based on variable room_type_reserved seems to be promising.', 'Feature generation based on the use of variable arrival_year wouldn’t be useful, but the use of type_of_meal_plan seems to be promising.', 'Given the usual semantics of type_of_meal_plan variable, dummification would have been a better codification.', 'It is better to drop the variable repeated_guest than to remove all records with missing values.', 'Not knowing the semantics of arrival_year variable, dummification could have been a more adequate codification.'] -Hotel_Reservations_class_histogram.png;A bar chart showing the distribution of the target variable booking_status.;['Balancing this dataset would be mandatory to improve the results.'] -Hotel_Reservations_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are dates, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -Hotel_Reservations_histograms_numeric.png;A set of histograms of the variables ['no_of_adults', 'no_of_children', 'no_of_weekend_nights', 'no_of_week_nights', 'lead_time', 'arrival_month', 'arrival_date', 'avg_price_per_room', 'no_of_special_requests'].;['All variables, but the class, should be dealt with as date.', 'The variable arrival_date can be seen as ordinal.', 'The variable lead_time can be seen as ordinal without losing information.', 'Variable arrival_date is balanced.', 'It is clear that variable no_of_week_nights shows some outliers, but we can’t be sure of the same for variable lead_time.', 'Outliers seem to be a problem in the dataset.', 'Variable no_of_adults shows some outlier values.', 'Variable no_of_weekend_nights doesn’t have any outliers.', 'Variable avg_price_per_room presents some outliers.', 'At least 75% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for avg_price_per_room and no_of_adults variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for no_of_children variable, dummification would be the most adequate encoding.', 'The variable no_of_children can be coded as ordinal without losing information.', 'Feature generation based on variable no_of_adults seems to be promising.', 'Feature generation based on the use of variable no_of_adults wouldn’t be useful, but the use of no_of_children seems to 
be promising.', 'Given the usual semantics of no_of_children variable, dummification would have been a better codification.', 'It is better to drop the variable no_of_children than to remove all records with missing values.', 'Not knowing the semantics of no_of_special_requests variable, dummification could have been a more adequate codification.'] -StressLevelDataset_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition basic_needs <= 3.5 and the second with the condition bullying <= 1.5.;['It is clear that variable self_esteem is one of the four most relevant features.', 'The variable self_esteem seems to be one of the three most relevant features.', 'The variable living_conditions discriminates between the target values, as shown in the decision tree.', 'It is possible to state that headache is the second most discriminative variable regarding the class.', 'Variable headache is one of the most relevant variables.', 'Variable bullying seems to be relevant for the majority of mining tasks.', 'Variables headache and depression seem to be useful for classification tasks.', 'A smaller tree would be delivered if we were to apply post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positives is bigger than the number of False Negatives.', 'The precision for the presented tree is higher than 90%.', 'The number of True Negatives is lower than the number of True Positives for the presented tree.', 'The number of False Negatives is higher than the number of True Negatives for the presented tree.', 'The number of False Negatives reported in the same tree is 50.', 'Considering that A=True<=>[basic_needs <= 3.5] and B=True<=>[bullying <= 1.5], it is possible to state that the KNN algorithm classifies (A, not B) as 2 for any k ≤ 271.', 'Considering that A=True<=>[basic_needs <= 3.5] and B=True<=>[bullying <= 1.5], it is possible to state that the Naive Bayes algorithm classifies (A, B) as 1.', 'Considering that A=True<=>[basic_needs <= 3.5] and B=True<=>[bullying <= 1.5], it is possible to state that the KNN algorithm classifies (not A, not B) as 2 for any k ≤ 271.'] -StressLevelDataset_overfitting_mlp.png;A multi-line chart showing the overfitting of an MLP where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] -StressLevelDataset_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] -StressLevelDataset_overfitting_rf.png;A multi-line chart showing the overfitting of a random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for the Random Forest identified as 10 may be explained by its estimators being in overfitting.', 'The random forest results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] -StressLevelDataset_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and 
the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 17.', 'KNN with 5 neighbours is in overfitting.', 'KNN with less than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 2 neighbors.'] -StressLevelDataset_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with a depth of 20 is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters overfitting for depths higher than 6.', 'The decision tree is in overfitting for depths above 4.', 'We are able to identify the existence of overfitting for decision tree models with a depth greater than 2.'] -StressLevelDataset_pca.png;A bar chart showing the explained variance ratio of 10 principal components.;['The first 9 principal components are enough for explaining half the data variance.', 'Using the first 8 principal components would imply an error between 15 and 30%.'] -StressLevelDataset_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['anxiety_level', 'self_esteem', 'depression', 'headache', 'sleep_quality', 'breathing_problem', 'living_conditions', 'basic_needs', 'study_load', 'bullying'].;['The intrinsic dimensionality of this dataset is 6.', 'One of the variables depression or basic_needs can be discarded without losing information.', 'The variable breathing_problem can be discarded without risking losing information.', 'Variables bullying and study_load are redundant, but we can’t say the same for the pair breathing_problem and living_conditions.', 'Variables headache and living_conditions are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable living_conditions seems to be relevant for the majority of mining tasks.', 'Variables sleep_quality and self_esteem seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable self_esteem might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection selecting variable study_load before variable depression.'] -StressLevelDataset_boxplots.png;A set of boxplots of the variables ['anxiety_level', 'self_esteem', 'depression', 'headache', 'sleep_quality', 'breathing_problem', 'living_conditions', 'basic_needs', 'study_load', 'bullying'].;['Variable headache is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable self_esteem shows some outliers, but we can’t be sure of the same for variable living_conditions.', 'Outliers seem to be a problem in the dataset.', 'Variable self_esteem shows a high number of outlier values.', 'Variable bullying doesn’t have any outliers.', 'Variable sleep_quality presents some outliers.', 'At least 85% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve the Naive Bayes performance in this 
dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -StressLevelDataset_histograms_symbolic.png;A set of bar charts of the variables ['mental_health_history'].;['All variables, but the class, should be dealt with as binary.', 'The variable mental_health_history can be seen as ordinal.', 'The variable mental_health_history can be seen as ordinal without losing information.', 'Considering the common semantics for mental_health_history and variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for mental_health_history variable, dummification would be the most adequate encoding.', 'The variable mental_health_history can be coded as ordinal without losing information.', 'Feature generation based on variable mental_health_history seems to be promising.', 'Feature generation based on the use of variable mental_health_history wouldn’t be useful, but the use of seems to be promising.', 'Given the usual semantics of mental_health_history variable, dummification would have been a better codification.', 'It is better to drop the variable mental_health_history than to remove all records with missing values.', 'Not knowing the semantics of mental_health_history variable, dummification could have been a more adequate codification.'] -StressLevelDataset_class_histogram.png;A bar chart showing the distribution of the target variable stress_level.;['Balancing this dataset would be mandatory to improve the results.'] -StressLevelDataset_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are symbolic, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -StressLevelDataset_histograms_numeric.png;A set of histograms of the variables ['anxiety_level', 'self_esteem', 'depression', 'headache', 'sleep_quality', 'breathing_problem', 'living_conditions', 'basic_needs', 'study_load', 'bullying'].;['All variables, but the class, should be dealt with as date.', 'The variable sleep_quality can be seen as ordinal.', 'The variable sleep_quality can be seen as ordinal without losing information.', 'Variable sleep_quality is balanced.', 'It is clear that variable living_conditions shows some outliers, but we can’t be sure of the same for variable breathing_problem.', 'Outliers seem to be a problem in the dataset.', 'Variable basic_needs shows a high number of outlier values.', 'Variable headache doesn’t have any outliers.', 'Variable breathing_problem presents some outliers.', 'At least 85% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for sleep_quality and anxiety_level variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for study_load variable, dummification would be the most adequate encoding.', 'The variable 
anxiety_level can be coded as ordinal without losing information.', 'Feature generation based on variable living_conditions seems to be promising.', 'Feature generation based on the use of variable breathing_problem wouldn’t be useful, but the use of anxiety_level seems to be promising.', 'Given the usual semantics of self_esteem variable, dummification would have been a better codification.', 'It is better to drop the variable bullying than to remove all records with missing values.', 'Not knowing the semantics of sleep_quality variable, dummification could have been a more adequate codification.'] -WineQT_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition density <= 1.0 and the second with the condition chlorides <= 0.08.;['It is clear that variable residual sugar is one of the four most relevant features.', 'The variable pH seems to be one of the three most relevant features.', 'The variable residual sugar discriminates between the target values, as shown in the decision tree.', 'It is possible to state that alcohol is the second most discriminative variable regarding the class.', 'Variable total sulfur dioxide is one of the most relevant variables.', 'Variable sulphates seems to be relevant for the majority of mining tasks.', 'Variables pH and sulphates seem to be useful for classification tasks.', 'A smaller tree would be delivered if we were to apply post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positives is bigger than the number of False Negatives.', 'The specificity for the presented tree is lower than 90%.', 'The number of False Positives reported in the same tree is 10.', 'The number of False Positives is higher than the number of True Positives for the presented tree.', 'The variable free sulfur dioxide seems to be one of the five most relevant features.', 'Considering that A=True<=>[density <= 1.0] and B=True<=>[chlorides <= 0.08], it is possible to state that the KNN algorithm classifies (A, not B) as 8 for any k ≤ 154.', 'Considering that A=True<=>[density <= 1.0] and B=True<=>[chlorides <= 0.08], it is possible to state that the Naive Bayes algorithm classifies (not A, B) as 5.', 'Considering that A=True<=>[density <= 1.0] and B=True<=>[chlorides <= 0.08], the Decision Tree presented classifies (not A, not B) as 3.'] -WineQT_overfitting_mlp.png;A multi-line chart showing the overfitting of an MLP where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] -WineQT_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] -WineQT_overfitting_rf.png;A multi-line chart showing the overfitting of a random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for the Random Forest identified as 2 may be explained by its estimators being in underfitting.', 'The random forest results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 
estimators.'] -WineQT_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 13.', 'KNN with 5 neighbours is in overfitting.', 'KNN with less than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 4 neighbors.'] -WineQT_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with a depth of 20 is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters overfitting for depths higher than 3.', 'The decision tree is in overfitting for depths above 3.', 'We are able to identify the existence of overfitting for decision tree models with a depth greater than 4.'] -WineQT_pca.png;A bar chart showing the explained variance ratio of 11 principal components.;['The first 6 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 15 and 25%.'] -WineQT_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['fixed acidity', 'volatile acidity', 'citric acid', 'residual sugar', 'chlorides', 'free sulfur dioxide', 'total sulfur dioxide', 'density', 'pH', 'sulphates', 'alcohol'].;['The intrinsic dimensionality of this dataset is 5.', 'One of the variables sulphates or free sulfur dioxide can be discarded without losing information.', 'The variable density can be discarded without risking losing information.', 'Variables fixed acidity and citric acid are redundant, but we can’t say the same for the pair free sulfur dioxide and density.', 'Variables fixed acidity and free sulfur dioxide are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable total sulfur dioxide seems to be relevant for the majority of mining tasks.', 'Variables chlorides and volatile acidity seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable fixed acidity might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection selecting variable residual sugar before variable free sulfur dioxide.'] -WineQT_boxplots.png;A set of boxplots of the variables ['fixed acidity', 'volatile acidity', 'citric acid', 'residual sugar', 'chlorides', 'free sulfur dioxide', 'total sulfur dioxide', 'density', 'pH', 'sulphates', 'alcohol'].;['Variable free sulfur dioxide is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable free sulfur dioxide shows some outliers, but we can’t be sure of the same for variable citric acid.', 'Outliers seem to be a problem in the dataset.', 'Variable total sulfur dioxide shows some outlier values.', 'Variable pH doesn’t have any outliers.', 'Variable alcohol presents some outliers.', 'At least 50% of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is 
one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -WineQT_class_histogram.png;A bar chart showing the distribution of the target variable quality.;['Balancing this dataset would be mandatory to improve the results.'] -WineQT_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -WineQT_histograms_numeric.png;A set of histograms of the variables ['fixed acidity', 'volatile acidity', 'citric acid', 'residual sugar', 'chlorides', 'free sulfur dioxide', 'total sulfur dioxide', 'density', 'pH', 'sulphates', 'alcohol'].;['All variables, but the class, should be dealt with as numeric.', 'The variable chlorides can be seen as ordinal.', 'The variable citric acid can be seen as ordinal without losing information.', 'Variable free sulfur dioxide is balanced.', 'It is clear that variable residual sugar shows some outliers, but we can’t be sure of the same for variable alcohol.', 'Outliers seem to be a problem in the dataset.', 'Variable residual sugar shows some outlier values.', 'Variable chlorides doesn’t have any outliers.', 'Variable density presents some outliers.', 'At least 50% of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for total sulfur dioxide and fixed acidity variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for sulphates variable, dummification would be the most adequate encoding.', 'The variable pH can be coded as ordinal without losing information.', 'Feature generation based on variable density seems to be promising.', 'Feature generation based on the use of variable alcohol wouldn’t be useful, but the use of fixed acidity seems to be promising.', 'Given the usual semantics of sulphates variable, dummification would have been a better codification.', 'It is better to drop the variable volatile acidity than to remove all records with missing values.', 'Not knowing the semantics of density variable, dummification could have been a more adequate codification.'] -loan_data_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Loan_Amount_Term <= 420.0 and the second with the condition ApplicantIncome <= 1519.0.;['It is clear that variable ApplicantIncome is one of the four most relevant features.', 'The variable Loan_Amount_Term seems to be one of the five most relevant features.', 'The variable ApplicantIncome discriminates between the target values, as shown in the decision tree.', 'It is possible to state that Loan_Amount_Term is the most discriminative variable regarding the 
class.', 'Variable LoanAmount is one of the most relevant variables.', 'Variable Loan_Amount_Term seems to be relevant for the majority of mining tasks.', 'Variables LoanAmount and Loan_Amount_Term seem to be useful for classification tasks.', 'A smaller tree would be delivered if we were to apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positives is smaller than the number of False Negatives.', 'The accuracy for the presented tree is lower than 60%.', 'The number of False Positives reported in the same tree is 30.', 'The number of False Positives is lower than the number of False Negatives for the presented tree.', 'The specificity for the presented tree is lower than 90%.', 'Considering that A=True<=>[Loan_Amount_Term <= 420.0] and B=True<=>[ApplicantIncome <= 1519.0], it is possible to state that the Naive Bayes algorithm classifies (not A, B) as Y.', 'Considering that A=True<=>[Loan_Amount_Term <= 420.0] and B=True<=>[ApplicantIncome <= 1519.0], the Decision Tree presented classifies (A, B) as N.', 'Considering that A=True<=>[Loan_Amount_Term <= 420.0] and B=True<=>[ApplicantIncome <= 1519.0], it is possible to state that the KNN algorithm classifies (not A, B) as N for any k ≤ 3.'] -loan_data_overfitting_mlp.png;A multi-line chart showing the overfitting of an MLP where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.'] -loan_data_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] -loan_data_overfitting_rf.png;A multi-line chart showing the overfitting of a random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for the Random Forest identified as 3 may be explained by its estimators being in underfitting.', 'The random forest results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] -loan_data_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 17.', 'KNN with 5 neighbours is in overfitting.', 'KNN with less than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 4 neighbors.'] -loan_data_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with a depth of 12 is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters overfitting for depths higher than 5.', 'The decision tree is in overfitting for depths above 8.', 'We are able to identify the existence of overfitting for decision tree models with a depth greater than 3.'] 
-loan_data_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -loan_data_pca.png;A bar chart showing the explained variance ratio of 4 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 3 principal components would imply an error between 10 and 25%.'] -loan_data_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['ApplicantIncome', 'CoapplicantIncome', 'LoanAmount', 'Loan_Amount_Term'].;['The intrinsic dimensionality of this dataset is 3.', 'One of the variables ApplicantIncome or LoanAmount can be discarded without losing information.', 'The variable ApplicantIncome can be discarded without risking losing information.', 'Variables Loan_Amount_Term and ApplicantIncome are redundant.', 'Variables LoanAmount and CoapplicantIncome are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable LoanAmount seems to be relevant for the majority of mining tasks.', 'Variables ApplicantIncome and Loan_Amount_Term seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable CoapplicantIncome might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection selecting variable ApplicantIncome before variable CoapplicantIncome.'] -loan_data_boxplots.png;A set of boxplots of the variables ['ApplicantIncome', 'CoapplicantIncome', 'LoanAmount', 'Loan_Amount_Term'].;['Variable CoapplicantIncome is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable LoanAmount shows some outliers, but we can’t be sure of the same for variable Loan_Amount_Term.', 'Outliers seem to be a problem in the dataset.', 'Variable LoanAmount shows a high number of outlier values.', 'Variable LoanAmount doesn’t have any outliers.', 'Variable ApplicantIncome presents some outliers.', 'At least 50% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -loan_data_histograms_symbolic.png;A set of bar charts of the variables ['Dependents', 'Property_Area', 'Gender', 'Married', 'Education', 'Self_Employed', 'Credit_History'].;['All variables, but the class, should be dealt with as numeric.', 'The variable Gender can be seen as ordinal.', 'The variable Gender can be seen as ordinal without losing information.', 'Considering the common semantics for Married and Dependents variables, dummification, if applied, would increase the risk of 
facing the curse of dimensionality.', 'Considering the common semantics for Education variable, dummification would be the most adequate encoding.', 'The variable Gender can be coded as ordinal without losing information.', 'Feature generation based on variable Education seems to be promising.', 'Feature generation based on the use of variable Credit_History wouldn’t be useful, but the use of Dependents seems to be promising.', 'Given the usual semantics of Education variable, dummification would have been a better codification.', 'It is better to drop the variable Married than to remove all records with missing values.', 'Not knowing the semantics of Property_Area variable, dummification could have been a more adequate codification.'] -loan_data_mv.png;A bar chart showing the number of missing values per variable of the dataset. The variables that have missing values are: ['Gender', 'Dependents', 'Self_Employed', 'Loan_Amount_Term', 'Credit_History'].;['Discarding variable Dependents would be better than discarding all the records with missing values for that variable.', 'Dropping all records with missing values would be better than dropping the variables with missing values.', 'Dropping all rows with missing values can lead to a dataset with less than 25% of the original data.', 'There is no reason to believe that discarding records showing missing values is safer than discarding the corresponding variables in this case.', 'Feature generation based on variable Credit_History seems to be promising.', 'It is better to drop the variable Gender than to remove all records with missing values.'] -loan_data_class_histogram.png;A bar chart showing the distribution of the target variable Loan_Status.;['Balancing this dataset would be mandatory to improve the results.'] -loan_data_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -loan_data_histograms_numeric.png;A set of histograms of the variables ['ApplicantIncome', 'CoapplicantIncome', 'LoanAmount', 'Loan_Amount_Term'].;['All variables, but the class, should be dealt with as numeric.', 'The variable CoapplicantIncome can be seen as ordinal.', 'The variable Loan_Amount_Term can be seen as ordinal without losing information.', 'Variable CoapplicantIncome is balanced.', 'It is clear that variable ApplicantIncome shows some outliers, but we can’t be sure of the same for variable Loan_Amount_Term.', 'Outliers seem to be a problem in the dataset.', 'Variable Loan_Amount_Term shows some outlier values.', 'Variable ApplicantIncome doesn’t have any outliers.', 'Variable Loan_Amount_Term presents some outliers.', 'At least 60% of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Loan_Amount_Term and ApplicantIncome variables, dummification, if applied, would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for ApplicantIncome variable, dummification would be the most adequate encoding.', 'The variable CoapplicantIncome can be coded as ordinal without losing information.', 'Feature generation 
based on variable CoapplicantIncome seems to be promising.', 'Feature generation based on the use of variable ApplicantIncome wouldn’t be useful, but the use of CoapplicantIncome seems to be promising.', 'Given the usual semantics of Loan_Amount_Term variable, dummification would have been a better codification.', 'It is better to drop the variable CoapplicantIncome than to remove all records with missing values.', 'Not knowing the semantics of Loan_Amount_Term variable, dummification could have been a more adequate codification.'] -Dry_Bean_Dataset_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Area <= 39172.5 and the second with the condition AspectRation <= 1.86.;['It is clear that variable ShapeFactor1 is one of the five most relevant features.', 'The variable Extent seems to be one of the three most relevant features.', 'The variable EquivDiameter discriminates between the target values, as shown in the decision tree.', 'It is possible to state that ShapeFactor1 is the second most discriminative variable regarding the class.', 'Variable AspectRation is one of the most relevant variables.', 'Variable Perimeter seems to be relevant for the majority of mining tasks.', 'Variables Solidity and EquivDiameter seem to be useful for classification tasks.', 'A smaller tree would be delivered if we were to apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positives is bigger than the number of False Negatives.', 'The accuracy for the presented tree is lower than 60%.', 'The number of True Negatives is lower than the number of False Negatives for the presented tree.', 'The number of False Positives is lower than the number of True Positives for the presented tree.', 'The precision for the presented tree is lower than 90%.', 'Considering that A=True<=>[Area <= 39172.5] and B=True<=>[AspectRation <= 1.86], it is possible to state that the KNN algorithm classifies (not A, not B) as SEKER for any k ≤ 2501.', 'Considering that A=True<=>[Area <= 39172.5] and B=True<=>[AspectRation <= 1.86], it is possible to state that the KNN algorithm classifies (not A, not B) as SEKER for any k ≤ 4982.', 'Considering that A=True<=>[Area <= 39172.5] and B=True<=>[AspectRation <= 1.86], the Decision Tree presented classifies (A, B) as HOROZ.'] -Dry_Bean_Dataset_overfitting_mlp.png;A multi-line chart showing the overfitting of an MLP where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 episodes.'] -Dry_Bean_Dataset_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] -Dry_Bean_Dataset_overfitting_rf.png;A multi-line chart showing the overfitting of a random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for the Random Forest identified as 3 may be explained by its estimators being in underfitting.', 'The random forest results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more 
than 1502 estimators.'] -Dry_Bean_Dataset_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 5.', 'KNN with 7 neighbour is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 6 neighbors.'] -Dry_Bean_Dataset_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 7.', 'The decision tree is in overfitting for depths above 10.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.'] -Dry_Bean_Dataset_pca.png;A bar chart showing the explained variance ratio of 9 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 4 principal components would imply an error between 5 and 25%.'] -Dry_Bean_Dataset_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Area', 'Perimeter', 'MinorAxisLength', 'AspectRation', 'Eccentricity', 'EquivDiameter', 'Extent', 'Solidity', 'roundness', 'ShapeFactor1'].;['The intrinsic dimensionality of this dataset is 4.', 'One of the variables Extent or Area can be discarded without losing information.', 'The variable Solidity can be discarded without risking losing information.', 'Variables roundness and Perimeter are redundant, but we can’t say the same for the pair MinorAxisLength and Eccentricity.', 'Variables MinorAxisLength and Eccentricity are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Extent seems to be relevant for the majority of mining tasks.', 'Variables ShapeFactor1 and Area seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable EquivDiameter might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Eccentricity previously than variable ShapeFactor1.'] -Dry_Bean_Dataset_boxplots.png;A set of boxplots of the variables ['Area', 'Perimeter', 'MinorAxisLength', 'AspectRation', 'Eccentricity', 'EquivDiameter', 'Extent', 'Solidity', 'roundness', 'ShapeFactor1'].;['Variable ShapeFactor1 is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Area shows some outliers, but we can’t be sure of the same for variable Perimeter.', 'Outliers seem to be a problem in the dataset.', 'Variable AspectRation shows a high number of outlier values.', 'Variable Extent doesn’t have any outliers.', 'Variable Solidity presents some outliers.', 'At least 50 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation 
is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -Dry_Bean_Dataset_class_histogram.png;A bar chart showing the distribution of the target variable Class.;['Balancing this dataset would be mandatory to improve the results.'] -Dry_Bean_Dataset_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are symbolic, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -Dry_Bean_Dataset_histograms_numeric.png;A set of histograms of the variables ['Area', 'Perimeter', 'MinorAxisLength', 'AspectRation', 'Eccentricity', 'EquivDiameter', 'Extent', 'Solidity', 'roundness', 'ShapeFactor1'].;['All variables, but the class, should be dealt with as date.', 'The variable Solidity can be seen as ordinal.', 'The variable Area can be seen as ordinal without losing information.', 'Variable Solidity is balanced.', 'It is clear that variable MinorAxisLength shows some outliers, but we can’t be sure of the same for variable Solidity.', 'Outliers seem to be a problem in the dataset.', 'Variable MinorAxisLength shows some outlier values.', 'Variable MinorAxisLength doesn’t have any outliers.', 'Variable Perimeter presents some outliers.', 'At least 85 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for MinorAxisLength and Area variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Area variable, dummification would be the most adequate encoding.', 'The variable ShapeFactor1 can be coded as ordinal without losing information.', 'Feature generation based on variable AspectRation seems to be promising.', 'Feature generation based on the use of variable Eccentricity wouldn’t be useful, but the use of Area seems to be promising.', 'Given the usual semantics of Solidity variable, dummification would have been a better codification.', 'It is better to drop the variable Area than removing all records with missing values.', 'Not knowing the semantics of Area variable, dummification could have been a more adequate codification.'] -credit_customers_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition existing_credits <= 1.5 and the second with the condition residence_since <= 3.5.;['It is clear that variable age is one of the five most relevant features.', 'The variable installment_commitment seems to be one of the five most relevant features.', 'The variable credit_amount discriminates between the target values, as shown in the decision tree.', 'It is possible to state that age is the second most discriminative variable regarding the class.', 'Variable duration is one of the most relevant variables.', 'Variable age seems to be relevant for the majority of mining 
tasks.', 'Variables credit_amount and age seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 6%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The accuracy for the presented tree is higher than 60%.', 'The number of True Negatives reported in the same tree is 50.', 'The number of False Negatives is lower than the number of True Negatives for the presented tree.', 'The recall for the presented tree is higher than its accuracy.', 'Considering that A=True<=>[existing_credits <= 1.5] and B=True<=>[residence_since <= 3.5], it is possible to state that KNN algorithm classifies (not A, not B) as good for any k ≤ 264.', 'Considering that A=True<=>[existing_credits <= 1.5] and B=True<=>[residence_since <= 3.5], it is possible to state that KNN algorithm classifies (not A, not B) as good for any k ≤ 183.', 'Considering that A=True<=>[existing_credits <= 1.5] and B=True<=>[residence_since <= 3.5], it is possible to state that KNN algorithm classifies (not A, not B) as bad for any k ≤ 146.'] -credit_customers_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] -credit_customers_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] -credit_customers_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 3, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] -credit_customers_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 13.', 'KNN with 11 neighbour is in overfitting.', 'KNN with more than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 4 neighbors.'] -credit_customers_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 20 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 7.', 'The decision tree is in overfitting for depths above 8.', 'We are able to identify the existence of overfitting for decision tree models with more than 5 nodes of depth.'] -credit_customers_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree 
where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -credit_customers_pca.png;A bar chart showing the explained variance ratio of 6 principal components.;['The first 3 principal components are enough for explaining half the data variance.', 'Using the first 3 principal components would imply an error between 5 and 25%.'] -credit_customers_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['duration', 'credit_amount', 'installment_commitment', 'residence_since', 'age', 'existing_credits'].;['The intrinsic dimensionality of this dataset is 3.', 'One of the variables age or existing_credits can be discarded without losing information.', 'The variable existing_credits can be discarded without risking losing information.', 'Variables residence_since and installment_commitment are redundant, but we can’t say the same for the pair credit_amount and age.', 'Variables existing_credits and residence_since are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable installment_commitment seems to be relevant for the majority of mining tasks.', 'Variables installment_commitment and residence_since seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable existing_credits might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable installment_commitment previously than variable duration.'] -credit_customers_boxplots.png;A set of boxplots of the variables ['duration', 'credit_amount', 'installment_commitment', 'residence_since', 'age', 'existing_credits'].;['Variable existing_credits is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable duration shows some outliers, but we can’t be sure of the same for variable credit_amount.', 'Outliers seem to be a problem in the dataset.', 'Variable installment_commitment shows some outlier values.', 'Variable residence_since doesn’t have any outliers.', 'Variable residence_since presents some outliers.', 'At least 75 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -credit_customers_histograms_symbolic.png;A set of bar charts of the variables ['checking_status', 'employment', 'other_parties', 'other_payment_plans', 'housing', 'num_dependents', 'own_telephone', 'foreign_worker'].;['All variables, but the class, should be dealt with as numeric.', 'The variable other_payment_plans can be seen as ordinal.', 'The variable num_dependents can be seen as ordinal without losing information.', 
'Considering the common semantics for housing and checking_status variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for num_dependents variable, dummification would be the most adequate encoding.', 'The variable foreign_worker can be coded as ordinal without losing information.', 'Feature generation based on variable foreign_worker seems to be promising.', 'Feature generation based on the use of variable employment wouldn’t be useful, but the use of checking_status seems to be promising.', 'Given the usual semantics of foreign_worker variable, dummification would have been a better codification.', 'It is better to drop the variable employment than removing all records with missing values.', 'Not knowing the semantics of checking_status variable, dummification could have been a more adequate codification.'] -credit_customers_class_histogram.png;A bar chart showing the distribution of the target variable class.;['Balancing this dataset would be mandatory to improve the results.'] -credit_customers_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -credit_customers_histograms_numeric.png;A set of histograms of the variables ['duration', 'credit_amount', 'installment_commitment', 'residence_since', 'age', 'existing_credits'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable duration can be seen as ordinal.', 'The variable duration can be seen as ordinal without losing information.', 'Variable residence_since is balanced.', 'It is clear that variable installment_commitment shows some outliers, but we can’t be sure of the same for variable residence_since.', 'Outliers seem to be a problem in the dataset.', 'Variable duration shows some outlier values.', 'Variable age doesn’t have any outliers.', 'Variable residence_since presents some outliers.', 'At least 50 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for installment_commitment and duration variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for credit_amount variable, dummification would be the most adequate encoding.', 'The variable age can be coded as ordinal without losing information.', 'Feature generation based on variable credit_amount seems to be promising.', 'Feature generation based on the use of variable duration wouldn’t be useful, but the use of credit_amount seems to be promising.', 'Given the usual semantics of credit_amount variable, dummification would have been a better codification.', 'It is better to drop the variable installment_commitment than removing all records with missing values.', 'Not knowing the semantics of existing_credits variable, dummification could have been a more adequate codification.'] -weatherAUS_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Rainfall <= 0.1 and the second with the condition Pressure3pm <= 
1009.65.;['It is clear that variable Cloud3pm is one of the three most relevant features.', 'The variable Temp3pm seems to be one of the two most relevant features.', 'The variable WindSpeed9am discriminates between the target values, as shown in the decision tree.', 'It is possible to state that Cloud9am is the second most discriminative variable regarding the class.', 'Variable Pressure3pm is one of the most relevant variables.', 'Variable Cloud3pm seems to be relevant for the majority of mining tasks.', 'Variables Cloud9am and WindSpeed9am seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 6%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The specificity for the presented tree is higher than 75%.', 'The number of False Negatives is higher than the number of False Positives for the presented tree.', 'The number of False Positives is higher than the number of False Negatives for the presented tree.', 'The precision for the presented tree is lower than its recall.', 'Considering that A=True<=>[Rainfall <= 0.1] and B=True<=>[Pressure3pm <= 1009.65], it is possible to state that KNN algorithm classifies (A, not B) as No for any k ≤ 1686.', 'Considering that A=True<=>[Rainfall <= 0.1] and B=True<=>[Pressure3pm <= 1009.65], it is possible to state that KNN algorithm classifies (A, not B) as Yes for any k ≤ 1154.', 'Considering that A=True<=>[Rainfall <= 0.1] and B=True<=>[Pressure3pm <= 1009.65], the Decision Tree presented classifies (not A, B) as Yes.'] -weatherAUS_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 episodes.'] -weatherAUS_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] -weatherAUS_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] -weatherAUS_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 13.', 'KNN with 7 neighbour is in overfitting.', 'KNN with less than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 4 neighbors.'] -weatherAUS_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 9 
nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 5.', 'The decision tree is in overfitting for depths above 3.', 'We are able to identify the existence of overfitting for decision tree models with more than 4 nodes of depth.'] -weatherAUS_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -weatherAUS_pca.png;A bar chart showing the explained variance ratio of 7 principal components.;['The first 3 principal components are enough for explaining half the data variance.', 'Using the first 5 principal components would imply an error between 15 and 25%.'] -weatherAUS_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Rainfall', 'WindSpeed9am', 'Pressure9am', 'Pressure3pm', 'Cloud9am', 'Cloud3pm', 'Temp3pm'].;['The intrinsic dimensionality of this dataset is 6.', 'One of the variables Cloud3pm or Pressure9am can be discarded without losing information.', 'The variable Pressure3pm can be discarded without risking losing information.', 'Variables Cloud9am and Temp3pm are redundant, but we can’t say the same for the pair Rainfall and Pressure3pm.', 'Variables Rainfall and Cloud3pm are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable WindSpeed9am seems to be relevant for the majority of mining tasks.', 'Variables Cloud3pm and Rainfall seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Rainfall might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Temp3pm previously than variable Rainfall.'] -weatherAUS_boxplots.png;A set of boxplots of the variables ['Rainfall', 'WindSpeed9am', 'Pressure9am', 'Pressure3pm', 'Cloud9am', 'Cloud3pm', 'Temp3pm'].;['Variable Pressure9am is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Cloud9am shows some outliers, but we can’t be sure of the same for variable Cloud3pm.', 'Outliers seem to be a problem in the dataset.', 'Variable Pressure3pm shows a high number of outlier values.', 'Variable Temp3pm doesn’t have any outliers.', 'Variable Pressure3pm presents some outliers.', 'At least 85 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -weatherAUS_histograms_symbolic.png;A set of bar charts of the variables ['Location', 'WindGustDir', 'WindDir9am', 'WindDir3pm', 
'RainToday'].;['All variables, but the class, should be dealt with as binary.', 'The variable RainToday can be seen as ordinal.', 'The variable WindDir3pm can be seen as ordinal without losing information.', 'Considering the common semantics for WindDir3pm and Location variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for WindDir9am variable, dummification would be the most adequate encoding.', 'The variable RainToday can be coded as ordinal without losing information.', 'Feature generation based on variable Location seems to be promising.', 'Feature generation based on the use of variable WindGustDir wouldn’t be useful, but the use of Location seems to be promising.', 'Given the usual semantics of WindDir9am variable, dummification would have been a better codification.', 'It is better to drop the variable WindDir9am than removing all records with missing values.', 'Not knowing the semantics of WindDir9am variable, dummification could have been a more adequate codification.'] -weatherAUS_mv.png;A bar chart showing the number of missing values per variable of the dataset. The variables that have missing values are: ['Rainfall', 'WindGustDir', 'WindDir9am', 'WindDir3pm', 'WindSpeed9am', 'Pressure9am', 'Pressure3pm', 'Cloud9am', 'Cloud3pm', 'Temp3pm', 'RainToday'].;['Discarding variable RainToday would be better than discarding all the records with missing values for that variable.', 'Dropping all records with missing values would be better than to drop the variables with missing values.', 'Dropping all rows with missing values can lead to a dataset with less than 40% of the original data.', 'There is no reason to believe that discarding records showing missing values is safer than discarding the corresponding variables in this case.', 'Feature generation based on variable RainToday seems to be promising.', 'It is better to drop the variable Pressure9am than removing all records with missing values.'] -weatherAUS_class_histogram.png;A bar chart showing the distribution of the target variable RainTomorrow.;['Balancing this dataset would be mandatory to improve the results.'] -weatherAUS_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -weatherAUS_histograms_numeric.png;A set of histograms of the variables ['Rainfall', 'WindSpeed9am', 'Pressure9am', 'Pressure3pm', 'Cloud9am', 'Cloud3pm', 'Temp3pm'].;['All variables, but the class, should be dealt with as binary.', 'The variable Pressure3pm can be seen as ordinal.', 'The variable Pressure3pm can be seen as ordinal without losing information.', 'Variable WindSpeed9am is balanced.', 'It is clear that variable Rainfall shows some outliers, but we can’t be sure of the same for variable Pressure3pm.', 'Outliers seem to be a problem in the dataset.', 'Variable Pressure9am shows a high number of outlier values.', 'Variable Rainfall doesn’t have any outliers.', 'Variable Cloud9am presents some outliers.', 'At least 85 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 
'Considering the common semantics for Rainfall and WindSpeed9am variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for WindSpeed9am variable, dummification would be the most adequate encoding.', 'The variable Pressure3pm can be coded as ordinal without losing information.', 'Feature generation based on variable Rainfall seems to be promising.', 'Feature generation based on the use of variable Pressure3pm wouldn’t be useful, but the use of Rainfall seems to be promising.', 'Given the usual semantics of Temp3pm variable, dummification would have been a better codification.', 'It is better to drop the variable Pressure9am than removing all records with missing values.', 'Not knowing the semantics of Pressure3pm variable, dummification could have been a more adequate codification.'] -car_insurance_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition displacement <= 1196.5 and the second with the condition height <= 1519.0.;['It is clear that variable length is one of the three most relevant features.', 'The variable age_of_car seems to be one of the three most relevant features.', 'The variable displacement discriminates between the target values, as shown in the decision tree.', 'It is possible to state that width is the first most discriminative variable regarding the class.', 'Variable gross_weight is one of the most relevant variables.', 'Variable airbags seems to be relevant for the majority of mining tasks.', 'Variables length and age_of_car seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The recall for the presented tree is lower than 90%.', 'The number of False Negatives reported in the same tree is 10.', 'The number of True Positives is lower than the number of False Positives for the presented tree.', 'The specificity for the presented tree is lower than its accuracy.', 'Considering that A=True<=>[displacement <= 1196.5] and B=True<=>[height <= 1519.0], it is possible to state that KNN algorithm classifies (A, not B) as 0 for any k ≤ 3813.', 'Considering that A=True<=>[displacement <= 1196.5] and B=True<=>[height <= 1519.0], it is possible to state that KNN algorithm classifies (not A, not B) as 1 for any k ≤ 3813.', 'Considering that A=True<=>[displacement <= 1196.5] and B=True<=>[height <= 1519.0], it is possible to state that Naive Bayes algorithm classifies (not A, not B), as 1.'] -car_insurance_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.'] -car_insurance_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.'] -car_insurance_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 
2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.'] -car_insurance_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 17.', 'KNN with 7 neighbour is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 6 neighbors.'] -car_insurance_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 20 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 4.', 'The decision tree is in overfitting for depths above 7.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.'] -car_insurance_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -car_insurance_pca.png;A bar chart showing the explained variance ratio of 9 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 7 principal components would imply an error between 5 and 25%.'] -car_insurance_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['policy_tenure', 'age_of_car', 'age_of_policyholder', 'airbags', 'displacement', 'length', 'width', 'height', 'gross_weight'].;['The intrinsic dimensionality of this dataset is 4.', 'One of the variables age_of_car or width can be discarded without losing information.', 'The variable age_of_policyholder can be discarded without risking losing information.', 'Variables gross_weight and length are redundant, but we can’t say the same for the pair policy_tenure and displacement.', 'Variables policy_tenure and age_of_car are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable age_of_car seems to be relevant for the majority of mining tasks.', 'Variables policy_tenure and age_of_car seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable width might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable age_of_policyholder previously than variable height.'] -car_insurance_boxplots.png;A set of boxplots of the variables ['policy_tenure', 'age_of_car', 'age_of_policyholder', 'airbags', 'displacement', 'length', 'width', 'height', 'gross_weight'].;['Variable width is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable age_of_policyholder shows some outliers, but we can’t be sure of the same for variable height.', 'Outliers seem to be a problem in the dataset.', 'Variable policy_tenure shows some outlier values.', 'Variable displacement doesn’t have any outliers.', 'Variable policy_tenure presents some outliers.', 'At least 85 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -car_insurance_histograms_symbolic.png;A set of bar charts of the variables ['area_cluster', 'segment', 'model', 'fuel_type', 'max_torque', 'max_power', 'steering_type', 'is_esc', 'is_adjustable_steering'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable is_esc can be seen as ordinal.', 'The variable fuel_type can be seen as ordinal without losing information.', 'Considering the common semantics for max_power and area_cluster variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for max_torque variable, dummification would be the most adequate encoding.', 'The variable model can be coded as ordinal without losing information.', 'Feature generation based on variable steering_type seems to be promising.', 'Feature generation based on the use of variable fuel_type wouldn’t be useful, but the use of area_cluster seems to be promising.', 'Given the usual semantics of model variable, dummification would have been a better codification.', 'It is better to drop the 
variable steering_type than removing all records with missing values.', 'Not knowing the semantics of max_power variable, dummification could have been a more adequate codification.'] -car_insurance_class_histogram.png;A bar chart showing the distribution of the target variable is_claim.;['Balancing this dataset would be mandatory to improve the results.'] -car_insurance_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are symbolic, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -car_insurance_histograms_numeric.png;A set of histograms of the variables ['policy_tenure', 'age_of_car', 'age_of_policyholder', 'airbags', 'displacement', 'length', 'width', 'height', 'gross_weight'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable age_of_policyholder can be seen as ordinal.', 'The variable width can be seen as ordinal without losing information.', 'Variable age_of_policyholder is balanced.', 'It is clear that variable policy_tenure shows some outliers, but we can’t be sure of the same for variable height.', 'Outliers seem to be a problem in the dataset.', 'Variable age_of_car shows some outlier values.', 'Variable airbags doesn’t have any outliers.', 'Variable gross_weight presents some outliers.', 'At least 85 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for width and policy_tenure variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for gross_weight variable, dummification would be the most adequate encoding.', 'The variable length can be coded as ordinal without losing information.', 'Feature generation based on variable width seems to be promising.', 'Feature generation based on the use of variable gross_weight wouldn’t be useful, but the use of policy_tenure seems to be promising.', 'Given the usual semantics of width variable, dummification would have been a better codification.', 'It is better to drop the variable height than removing all records with missing values.', 'Not knowing the semantics of policy_tenure variable, dummification could have been a more adequate codification.'] -heart_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition slope <= 1.5 and the second with the condition restecg <= 0.5.;['It is clear that variable thal is one of the four most relevant features.', 'The variable thal seems to be one of the four most relevant features.', 'The variable trestbps discriminates between the target values, as shown in the decision tree.', 'It is possible to state that thal is the second most discriminative variable regarding the class.', 'Variable oldpeak is one of the most relevant variables.', 'Variable restecg seems to be relevant for the majority of mining tasks.', 'Variables slope and chol seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positive is bigger than the number of 
False Negatives.', 'The precision for the presented tree is lower than 60%.', 'The number of True Negatives reported in the same tree is 50.', 'The number of True Positives is lower than the number of False Positives for the presented tree.', 'The recall for the presented tree is higher than its specificity.', 'Considering that A=True<=>[slope <= 1.5] and B=True<=>[restecg <= 0.5], it is possible to state that Naive Bayes algorithm classifies (A, not B), as 0.', 'Considering that A=True<=>[slope <= 1.5] and B=True<=>[restecg <= 0.5], it is possible to state that KNN algorithm classifies (A,B) as 0 for any k ≤ 202.', 'Considering that A=True<=>[slope <= 1.5] and B=True<=>[restecg <= 0.5], the Decision Tree presented classifies (not A, B) as 1.'] -heart_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 episodes.'] -heart_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] -heart_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 3, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.'] -heart_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 5.', 'KNN with 5 neighbour is in overfitting.', 'KNN with less than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 5 neighbors.'] -heart_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 20 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 10.', 'The decision tree is in overfitting for depths above 8.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.'] -heart_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -heart_pca.png;A bar chart showing the explained variance ratio of 10 principal components.;['The first 5 principal components are enough for explaining half the data variance.', 'Using the first 6 principal components 
would imply an error between 10 and 30%.'] -heart_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['age', 'cp', 'trestbps', 'chol', 'restecg', 'thalach', 'oldpeak', 'slope', 'ca', 'thal'].;['The intrinsic dimensionality of this dataset is 4.', 'One of the variables restecg or thalach can be discarded without losing information.', 'The variable trestbps can be discarded without risking losing information.', 'Variables thalach and slope are redundant.', 'Variables restecg and thal are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable thalach seems to be relevant for the majority of mining tasks.', 'Variables slope and age seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable trestbps might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable cp previously than variable ca.'] -heart_boxplots.png;A set of boxplots of the variables ['age', 'cp', 'trestbps', 'chol', 'restecg', 'thalach', 'oldpeak', 'slope', 'ca', 'thal'].;['Variable ca is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable ca shows some outliers, but we can’t be sure of the same for variable restecg.', 'Outliers seem to be a problem in the dataset.', 'Variable restecg shows a high number of outlier values.', 'Variable thal doesn’t have any outliers.', 'Variable ca presents some outliers.', 'At least 85 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -heart_histograms_symbolic.png;A set of bar charts of the variables ['sex', 'fbs', 'exang'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable fbs can be seen as ordinal.', 'The variable exang can be seen as ordinal without losing information.', 'Considering the common semantics for fbs and sex variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for exang variable, dummification would be the most adequate encoding.', 'The variable fbs can be coded as ordinal without losing information.', 'Feature generation based on variable exang seems to be promising.', 'Feature generation based on the use of variable sex wouldn’t be useful, but the use of fbs seems to be promising.', 'Given the usual semantics of sex variable, dummification would have been a better codification.', 'It is better to drop the variable sex than removing all records with missing values.', 'Not knowing the semantics of sex variable, dummification could have been a more adequate codification.'] -heart_class_histogram.png;A bar chart showing the distribution of the target variable target.;['Balancing this 
dataset would be mandatory to improve the results.'] -heart_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are symbolic, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -heart_histograms_numeric.png;A set of histograms of the variables ['age', 'cp', 'trestbps', 'chol', 'restecg', 'thalach', 'oldpeak', 'slope', 'ca', 'thal'].;['All variables, but the class, should be dealt with as date.', 'The variable cp can be seen as ordinal.', 'The variable thalach can be seen as ordinal without losing information.', 'Variable thalach is balanced.', 'It is clear that variable oldpeak shows some outliers, but we can’t be sure of the same for variable age.', 'Outliers seem to be a problem in the dataset.', 'Variable oldpeak shows some outlier values.', 'Variable chol doesn’t have any outliers.', 'Variable thalach presents some outliers.', 'At least 85 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for age and cp variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for age variable, dummification would be the most adequate encoding.', 'The variable trestbps can be coded as ordinal without losing information.', 'Feature generation based on variable restecg seems to be promising.', 'Feature generation based on the use of variable trestbps wouldn’t be useful, but the use of age seems to be promising.', 'Given the usual semantics of chol variable, dummification would have been a better codification.', 'It is better to drop the variable oldpeak than removing all records with missing values.', 'Not knowing the semantics of thal variable, dummification could have been a more adequate codification.'] -Breast_Cancer_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition perimeter_mean <= 90.47 and the second with the condition texture_worst <= 27.89.;['It is clear that variable smoothness_se is one of the five most relevant features.', 'The variable radius_worst seems to be one of the two most relevant features.', 'The variable radius_worst discriminates between the target values, as shown in the decision tree.', 'It is possible to state that texture_worst is the second most discriminative variable regarding the class.', 'Variable perimeter_worst is one of the most relevant variables.', 'Variable texture_worst seems to be relevant for the majority of mining tasks.', 'Variables area_se and perimeter_worst seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 6%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The specificity for the presented tree is lower than 60%.', 'The number of False Positives is higher than the number of True Positives for the presented tree.', 'The number of False Positives is lower than the number of True Negatives for the
presented tree.', 'Considering that A=True<=>[perimeter_mean <= 90.47] and B=True<=>[texture_worst <= 27.89], it is possible to state that KNN algorithm classifies (A,B) as B for any k ≤ 20.', 'Considering that A=True<=>[perimeter_mean <= 90.47] and B=True<=>[texture_worst <= 27.89], the Decision Tree presented classifies (not A, B) as M.'] -Breast_Cancer_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] -Breast_Cancer_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] -Breast_Cancer_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.'] -Breast_Cancer_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 17.', 'KNN with 11 neighbour is in overfitting.', 'KNN with more than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 2 neighbors.'] -Breast_Cancer_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 5 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 10.', 'The decision tree is in overfitting for depths above 7.', 'We are able to identify the existence of overfitting for decision tree models with more than 2 nodes of depth.'] -Breast_Cancer_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -Breast_Cancer_pca.png;A bar chart showing the explained variance ratio of 10 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 9 principal components would imply an error between 10 and 20%.'] -Breast_Cancer_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset.
The variables are ['texture_mean', 'perimeter_mean', 'texture_se', 'perimeter_se', 'area_se', 'smoothness_se', 'symmetry_se', 'radius_worst', 'texture_worst', 'perimeter_worst'].;['The intrinsic dimensionality of this dataset is 6.', 'One of the variables perimeter_mean or symmetry_se can be discarded without losing information.', 'The variable perimeter_worst can be discarded without risking losing information.', 'Variables radius_worst and symmetry_se are redundant, but we can’t say the same for the pair perimeter_worst and perimeter_se.', 'Variables texture_mean and texture_worst are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable perimeter_mean seems to be relevant for the majority of mining tasks.', 'Variables perimeter_worst and perimeter_se seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable texture_se might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable radius_worst previously than variable area_se.'] -Breast_Cancer_boxplots.png;A set of boxplots of the variables ['texture_mean', 'perimeter_mean', 'texture_se', 'perimeter_se', 'area_se', 'smoothness_se', 'symmetry_se', 'radius_worst', 'texture_worst', 'perimeter_worst'].;['Variable radius_worst is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable radius_worst shows some outliers, but we can’t be sure of the same for variable perimeter_mean.', 'Outliers seem to be a problem in the dataset.', 'Variable texture_mean shows a high number of outlier values.', 'Variable symmetry_se doesn’t have any outliers.', 'Variable perimeter_se presents some outliers.', 'At least 75 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -Breast_Cancer_class_histogram.png;A bar chart showing the distribution of the target variable diagnosis.;['Balancing this dataset would be mandatory to improve the results.'] -Breast_Cancer_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -Breast_Cancer_histograms_numeric.png;A set of histograms of the variables ['texture_mean', 'perimeter_mean', 'texture_se', 'perimeter_se', 'area_se', 'smoothness_se', 'symmetry_se', 'radius_worst', 'texture_worst', 'perimeter_worst'].;['All variables, but the class, should be dealt with as date.', 'The variable perimeter_worst can be seen as ordinal.', 'The variable texture_se can be seen as ordinal without 
losing information.', 'Variable perimeter_se is balanced.', 'It is clear that variable texture_worst shows some outliers, but we can’t be sure of the same for variable symmetry_se.', 'Outliers seem to be a problem in the dataset.', 'Variable perimeter_se shows a high number of outlier values.', 'Variable perimeter_worst doesn’t have any outliers.', 'Variable texture_worst presents some outliers.', 'At least 85 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for radius_worst and texture_mean variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for area_se variable, dummification would be the most adequate encoding.', 'The variable perimeter_se can be coded as ordinal without losing information.', 'Feature generation based on variable radius_worst seems to be promising.', 'Feature generation based on the use of variable texture_worst wouldn’t be useful, but the use of texture_mean seems to be promising.', 'Given the usual semantics of perimeter_worst variable, dummification would have been a better codification.', 'It is better to drop the variable perimeter_se than removing all records with missing values.', 'Not knowing the semantics of perimeter_mean variable, dummification could have been a more adequate codification.'] -e-commerce_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Prior_purchases <= 3.5 and the second with the condition Customer_care_calls <= 4.5.;['It is clear that variable Customer_care_calls is one of the two most relevant features.', 'The variable Customer_rating seems to be one of the three most relevant features.', 'The variable Prior_purchases discriminates between the target values, as shown in the decision tree.', 'It is possible to state that Discount_offered is the first most discriminative variable regarding the class.', 'Variable Discount_offered is one of the most relevant variables.', 'Variable Discount_offered seems to be relevant for the majority of mining tasks.', 'Variables Cost_of_the_Product and Customer_rating seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The specificity for the presented tree is higher than 90%.', 'The number of True Negatives is lower than the number of False Negatives for the presented tree.', 'The number of True Positives is higher than the number of False Positives for the presented tree.', 'Considering that A=True<=>[Prior_purchases <= 3.5] and B=True<=>[Customer_care_calls <= 4.5], it is possible to state that Naive Bayes algorithm classifies (not A, not B), as No.', 'Considering that A=True<=>[Prior_purchases <= 3.5] and B=True<=>[Customer_care_calls <= 4.5], it is possible to state that KNN algorithm classifies (A,B) as No for any k ≤ 906.', 'Considering that A=True<=>[Prior_purchases <= 3.5] and B=True<=>[Customer_care_calls <= 4.5], it is possible to state that Naive Bayes algorithm classifies (A, not B), as No.'] -e-commerce_overfitting_mlp.png;A multi-line chart showing the
overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 episodes.'] -e-commerce_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] -e-commerce_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 2, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] -e-commerce_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 5.', 'KNN with 11 neighbour is in overfitting.', 'KNN with less than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 3 neighbors.'] -e-commerce_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 5 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 4.', 'The decision tree is in overfitting for depths above 7.', 'We are able to identify the existence of overfitting for decision tree models with more than 5 nodes of depth.'] -e-commerce_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -e-commerce_pca.png;A bar chart showing the explained variance ratio of 6 principal components.;['The first 3 principal components are enough for explaining half the data variance.', 'Using the first 4 principal components would imply an error between 15 and 25%.'] -e-commerce_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['Customer_care_calls', 'Customer_rating', 'Cost_of_the_Product', 'Prior_purchases', 'Discount_offered', 'Weight_in_gms'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables Prior_purchases or Cost_of_the_Product can be discarded without losing information.', 'The variable Weight_in_gms can be discarded without risking losing information.', 'Variables Customer_care_calls and Prior_purchases are redundant, but we can’t say the same for the pair Cost_of_the_Product and Customer_rating.', 'Variables Customer_rating and Cost_of_the_Product are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Customer_rating seems to be relevant for the majority of mining tasks.', 'Variables Cost_of_the_Product and Prior_purchases seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Customer_care_calls might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Customer_care_calls previously than variable Weight_in_gms.'] -e-commerce_boxplots.png;A set of boxplots of the variables ['Customer_care_calls', 'Customer_rating', 'Cost_of_the_Product', 'Prior_purchases', 'Discount_offered', 'Weight_in_gms'].;['Variable Customer_rating is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Cost_of_the_Product shows some outliers, but we can’t be sure of the same for variable Customer_rating.', 'Outliers seem to be a problem in the dataset.', 'Variable Weight_in_gms shows some outlier values.', 'Variable Customer_care_calls doesn’t have any outliers.', 'Variable Weight_in_gms presents some outliers.', 'At least 50 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -e-commerce_histograms_symbolic.png;A set of bar charts of the variables ['Warehouse_block', 'Mode_of_Shipment', 'Product_importance', 'Gender'].;['All variables, but the class, should be dealt with as binary.', 'The variable Gender can be seen as ordinal.', 'The variable Warehouse_block can be seen as ordinal without losing information.', 'Considering the common semantics for Mode_of_Shipment and Warehouse_block variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Gender variable, dummification would be the most adequate encoding.', 'The variable Product_importance can be coded as ordinal without losing information.', 'Feature generation based on variable Gender seems to be promising.', 'Feature generation based on the use of variable Mode_of_Shipment wouldn’t be useful, but the use of Warehouse_block seems to be promising.', 'Given the usual semantics of Gender variable, dummification would have 
been a better codification.', 'It is better to drop the variable Product_importance than removing all records with missing values.', 'Not knowing the semantics of Mode_of_Shipment variable, dummification could have been a more adequate codification.'] -e-commerce_class_histogram.png;A bar chart showing the distribution of the target variable ReachedOnTime.;['Balancing this dataset would be mandatory to improve the results.'] -e-commerce_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are binary, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -e-commerce_histograms_numeric.png;A set of histograms of the variables ['Customer_care_calls', 'Customer_rating', 'Cost_of_the_Product', 'Prior_purchases', 'Discount_offered', 'Weight_in_gms'].;['All variables, but the class, should be dealt with as numeric.', 'The variable Weight_in_gms can be seen as ordinal.', 'The variable Prior_purchases can be seen as ordinal without losing information.', 'Variable Prior_purchases is balanced.', 'It is clear that variable Prior_purchases shows some outliers, but we can’t be sure of the same for variable Customer_rating.', 'Outliers seem to be a problem in the dataset.', 'Variable Discount_offered shows some outlier values.', 'Variable Weight_in_gms doesn’t have any outliers.', 'Variable Customer_rating presents some outliers.', 'At least 75 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Customer_care_calls and Customer_rating variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Discount_offered variable, dummification would be the most adequate encoding.', 'The variable Prior_purchases can be coded as ordinal without losing information.', 'Feature generation based on variable Weight_in_gms seems to be promising.', 'Feature generation based on the use of variable Discount_offered wouldn’t be useful, but the use of Customer_care_calls seems to be promising.', 'Given the usual semantics of Discount_offered variable, dummification would have been a better codification.', 'It is better to drop the variable Customer_rating than removing all records with missing values.', 'Not knowing the semantics of Discount_offered variable, dummification could have been a more adequate codification.'] -maintenance_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Rotational speed [rpm] <= 1381.5 and the second with the condition Torque [Nm] <= 65.05.;['It is clear that variable Torque [Nm] is one of the two most relevant features.', 'The variable Air temperature [K] seems to be one of the five most relevant features.', 'The variable Tool wear [min] discriminates between the target values, as shown in the decision tree.', 'It is possible to state that Process temperature [K] is the first most discriminative variable regarding the class.', 'Variable Tool wear [min] is one of the most relevant variables.', 'Variable Process temperature [K] seems to be relevant for the majority of mining tasks.', 'Variables 
Tool wear [min] and Rotational speed [rpm] seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The accuracy for the presented tree is higher than 60%.', 'The number of False Positives reported in the same tree is 50.', 'The number of True Negatives is lower than the number of False Positives for the presented tree.', 'The number of False Negatives is lower than the number of True Negatives for the presented tree.', 'Considering that A=True<=>[Rotational speed [rpm] <= 1381.5] and B=True<=>[Torque [Nm] <= 65.05], the Decision Tree presented classifies (A, not B) as 0.', 'Considering that A=True<=>[Rotational speed [rpm] <= 1381.5] and B=True<=>[Torque [Nm] <= 65.05], it is possible to state that KNN algorithm classifies (A,B) as 0 for any k ≤ 943.', 'Considering that A=True<=>[Rotational speed [rpm] <= 1381.5] and B=True<=>[Torque [Nm] <= 65.05], it is possible to state that KNN algorithm classifies (not A, B) as 1 for any k ≤ 5990.'] -maintenance_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 episodes.'] -maintenance_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] -maintenance_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 2, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] -maintenance_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 17.', 'KNN with 11 neighbour is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 4 neighbors.'] -maintenance_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 20 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 7.', 'The decision tree is in overfitting for depths above 5.', 'We are able to identify the existence of overfitting for decision tree models with more than 2 nodes of depth.'] -maintenance_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents 
the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -maintenance_pca.png;A bar chart showing the explained variance ratio of 5 principal components.;['The first 3 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 10 and 30%.'] -maintenance_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Air temperature [K]', 'Process temperature [K]', 'Rotational speed [rpm]', 'Torque [Nm]', 'Tool wear [min]'].;['The intrinsic dimensionality of this dataset is 3.', 'One of the variables Rotational speed [rpm] or Torque [Nm] can be discarded without losing information.', 'The variable Process temperature [K] can be discarded without risking losing information.', 'Variables Air temperature [K] and Tool wear [min] are redundant.', 'Variables Rotational speed [rpm] and Torque [Nm] are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Rotational speed [rpm] seems to be relevant for the majority of mining tasks.', 'Variables Rotational speed [rpm] and Process temperature [K] seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Process temperature [K] might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Rotational speed [rpm] previously than variable Tool wear [min].'] -maintenance_boxplots.png;A set of boxplots of the variables ['Air temperature [K]', 'Process temperature [K]', 'Rotational speed [rpm]', 'Torque [Nm]', 'Tool wear [min]'].;['Variable Torque [Nm] is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Rotational speed [rpm] shows some outliers, but we can’t be sure of the same for variable Process temperature [K].', 'Outliers seem to be a problem in the dataset.', 'Variable Process temperature [K] shows some outlier values.', 'Variable Torque [Nm] doesn’t have any outliers.', 'Variable Process temperature [K] presents some outliers.', 'At least 60 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -maintenance_histograms_symbolic.png;A set of bar charts of the variables ['Type', 'TWF', 'HDF', 'PWF', 'OSF', 'RNF'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable PWF can be seen as ordinal.', 'The variable Type can be seen as ordinal without losing information.', 'Considering the common semantics for Type and TWF variables, dummification if applied would increase the risk of facing the curse of 
dimensionality.', 'Considering the common semantics for Type variable, dummification would be the most adequate encoding.', 'The variable Type can be coded as ordinal without losing information.', 'Feature generation based on variable TWF seems to be promising.', 'Feature generation based on the use of variable OSF wouldn’t be useful, but the use of Type seems to be promising.', 'Given the usual semantics of RNF variable, dummification would have been a better codification.', 'It is better to drop the variable TWF than removing all records with missing values.', 'Not knowing the semantics of PWF variable, dummification could have been a more adequate codification.'] -maintenance_class_histogram.png;A bar chart showing the distribution of the target variable Machine_failure.;['Balancing this dataset would be mandatory to improve the results.'] -maintenance_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are symbolic, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -maintenance_histograms_numeric.png;A set of histograms of the variables ['Air temperature [K]', 'Process temperature [K]', 'Rotational speed [rpm]', 'Torque [Nm]', 'Tool wear [min]'].;['All variables, but the class, should be dealt with as date.', 'The variable Process temperature [K] can be seen as ordinal.', 'The variable Air temperature [K] can be seen as ordinal without losing information.', 'Variable Air temperature [K] is balanced.', 'It is clear that variable Rotational speed [rpm] shows some outliers, but we can’t be sure of the same for variable Torque [Nm].', 'Outliers seem to be a problem in the dataset.', 'Variable Tool wear [min] shows some outlier values.', 'Variable Rotational speed [rpm] doesn’t have any outliers.', 'Variable Tool wear [min] presents some outliers.', 'At least 75 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Tool wear [min] and Air temperature [K] variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Air temperature [K] variable, dummification would be the most adequate encoding.', 'The variable Tool wear [min] can be coded as ordinal without losing information.', 'Feature generation based on variable Rotational speed [rpm] seems to be promising.', 'Feature generation based on the use of variable Air temperature [K] wouldn’t be useful, but the use of Process temperature [K] seems to be promising.', 'Given the usual semantics of Rotational speed [rpm] variable, dummification would have been a better codification.', 'It is better to drop the variable Rotational speed [rpm] than removing all records with missing values.', 'Not knowing the semantics of Rotational speed [rpm] variable, dummification could have been a more adequate codification.'] -Churn_Modelling_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Age <= 42.5 and the second with the condition NumOfProducts <= 2.5.;['It is clear that variable Tenure is one of the two most relevant features.', 'The variable 
EstimatedSalary seems to be one of the three most relevant features.', 'The variable Balance discriminates between the target values, as shown in the decision tree.', 'It is possible to state that NumOfProducts is the first most discriminative variable regarding the class.', 'Variable Tenure is one of the most relevant variables.', 'Variable CreditScore seems to be relevant for the majority of mining tasks.', 'Variables CreditScore and EstimatedSalary seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The specificity for the presented tree is lower than 90%.', 'The number of True Negatives reported in the same tree is 10.', 'The number of False Positives is lower than the number of True Positives for the presented tree.', 'The variable CreditScore seems to be one of the two most relevant features.', 'Considering that A=True<=>[Age <= 42.5] and B=True<=>[NumOfProducts <= 2.5], it is possible to state that Naive Bayes algorithm classifies (A,B), as 0.', 'Considering that A=True<=>[Age <= 42.5] and B=True<=>[NumOfProducts <= 2.5], it is possible to state that KNN algorithm classifies (not A, not B) as 1 for any k ≤ 114.', 'Considering that A=True<=>[Age <= 42.5] and B=True<=>[NumOfProducts <= 2.5], it is possible to state that KNN algorithm classifies (not A, not B) as 0 for any k ≤ 1931.'] -Churn_Modelling_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 episodes.'] -Churn_Modelling_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] -Churn_Modelling_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 3, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.'] -Churn_Modelling_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 13.', 'KNN with 5 neighbour is in overfitting.', 'KNN with less than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 6 neighbors.'] -Churn_Modelling_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 5 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows 
that the model enters in overfitting for models with depth higher than 4.', 'The decision tree is in overfitting for depths above 9.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.'] -Churn_Modelling_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -Churn_Modelling_pca.png;A bar chart showing the explained variance ratio of 6 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 4 principal components would imply an error between 15 and 30%.'] -Churn_Modelling_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['CreditScore', 'Age', 'Tenure', 'Balance', 'NumOfProducts', 'EstimatedSalary'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables Balance or EstimatedSalary can be discarded without losing information.', 'The variable Tenure can be discarded without risking losing information.', 'Variables EstimatedSalary and Age are redundant, but we can’t say the same for the pair Balance and NumOfProducts.', 'Variables Age and NumOfProducts are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Tenure seems to be relevant for the majority of mining tasks.', 'Variables EstimatedSalary and Age seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Age might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Balance previously than variable Tenure.'] -Churn_Modelling_boxplots.png;A set of boxplots of the variables ['CreditScore', 'Age', 'Tenure', 'Balance', 'NumOfProducts', 'EstimatedSalary'].;['Variable Balance is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Balance shows some outliers, but we can’t be sure of the same for variable EstimatedSalary.', 'Outliers seem to be a problem in the dataset.', 'Variable EstimatedSalary shows a high number of outlier values.', 'Variable Tenure doesn’t have any outliers.', 'Variable Balance presents some outliers.', 'At least 60 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -Churn_Modelling_histograms_symbolic.png;A set of bar charts of the variables ['Geography', 'Gender', 'HasCrCard', 'IsActiveMember'].;['All variables, but the class, should be dealt with as date.', 'The variable IsActiveMember can be seen 
as ordinal.', 'The variable IsActiveMember can be seen as ordinal without losing information.', 'Considering the common semantics for Geography and Gender variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for IsActiveMember variable, dummification would be the most adequate encoding.', 'The variable Geography can be coded as ordinal without losing information.', 'Feature generation based on variable Gender seems to be promising.', 'Feature generation based on the use of variable HasCrCard wouldn’t be useful, but the use of Geography seems to be promising.', 'Given the usual semantics of HasCrCard variable, dummification would have been a better codification.', 'It is better to drop the variable IsActiveMember than removing all records with missing values.', 'Not knowing the semantics of Geography variable, dummification could have been a more adequate codification.'] -Churn_Modelling_class_histogram.png;A bar chart showing the distribution of the target variable Exited.;['Balancing this dataset would be mandatory to improve the results.'] -Churn_Modelling_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are binary, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -Churn_Modelling_histograms_numeric.png;A set of histograms of the variables ['CreditScore', 'Age', 'Tenure', 'Balance', 'NumOfProducts', 'EstimatedSalary'].;['All variables, but the class, should be dealt with as numeric.', 'The variable Tenure can be seen as ordinal.', 'The variable EstimatedSalary can be seen as ordinal without losing information.', 'Variable EstimatedSalary is balanced.', 'It is clear that variable Balance shows some outliers, but we can’t be sure of the same for variable CreditScore.', 'Outliers seem to be a problem in the dataset.', 'Variable NumOfProducts shows some outlier values.', 'Variable Balance doesn’t have any outliers.', 'Variable Age presents some outliers.', 'At least 50 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Tenure and CreditScore variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Balance variable, dummification would be the most adequate encoding.', 'The variable NumOfProducts can be coded as ordinal without losing information.', 'Feature generation based on variable Balance seems to be promising.', 'Feature generation based on the use of variable Balance wouldn’t be useful, but the use of CreditScore seems to be promising.', 'Given the usual semantics of Age variable, dummification would have been a better codification.', 'It is better to drop the variable EstimatedSalary than removing all records with missing values.', 'Not knowing the semantics of Tenure variable, dummification could have been a more adequate codification.'] -vehicle_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition MAJORSKEWNESS <= 74.5 and the second with the condition CIRCULARITY <= 49.5.;['It is clear that 
variable MINORVARIANCE is one of the three most relevant features.', 'The variable MINORKURTOSIS seems to be one of the four most relevant features.', 'The variable DISTANCE CIRCULARITY discriminates between the target values, as shown in the decision tree.', 'It is possible to state that CIRCULARITY is the second most discriminative variable regarding the class.', 'Variable DISTANCE CIRCULARITY is one of the most relevant variables.', 'Variable GYRATIONRADIUS seems to be relevant for the majority of mining tasks.', 'Variables MAJORSKEWNESS and GYRATIONRADIUS seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The recall for the presented tree is higher than 60%.', 'The number of False Positives is lower than the number of True Positives for the presented tree.', 'The number of True Negatives is higher than the number of True Positives for the presented tree.', 'The variable MAJORVARIANCE seems to be one of the four most relevant features.', 'Considering that A=True<=>[MAJORSKEWNESS <= 74.5] and B=True<=>[CIRCULARITY <= 49.5], the Decision Tree presented classifies (A,B) as 3.', 'Considering that A=True<=>[MAJORSKEWNESS <= 74.5] and B=True<=>[CIRCULARITY <= 49.5], the Decision Tree presented classifies (A, not B) as 4.', 'Considering that A=True<=>[MAJORSKEWNESS <= 74.5] and B=True<=>[CIRCULARITY <= 49.5], it is possible to state that KNN algorithm classifies (not A, B) as 2 for any k ≤ 1.'] -vehicle_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 episodes.'] -vehicle_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] -vehicle_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] -vehicle_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 13.', 'KNN with 7 neighbour is in overfitting.', 'KNN with more than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 6 neighbors.'] -vehicle_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 20 nodes of depth is in 
overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 9.', 'The decision tree is in overfitting for depths above 7.', 'We are able to identify the existence of overfitting for decision tree models with more than 5 nodes of depth.'] -vehicle_pca.png;A bar chart showing the explained variance ratio of 11 principal components.;['The first 4 principal components are enough for explaining half the data variance.', 'Using the first 8 principal components would imply an error between 5 and 20%.'] -vehicle_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['COMPACTNESS', 'CIRCULARITY', 'DISTANCE CIRCULARITY', 'RADIUS RATIO', 'MAJORVARIANCE', 'MINORVARIANCE', 'GYRATIONRADIUS', 'MAJORSKEWNESS', 'MINORSKEWNESS', 'MINORKURTOSIS', 'MAJORKURTOSIS'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables RADIUS RATIO or DISTANCE CIRCULARITY can be discarded without losing information.', 'The variable MINORKURTOSIS can be discarded without risking losing information.', 'Variables MINORVARIANCE and MAJORKURTOSIS are redundant, but we can’t say the same for the pair MAJORVARIANCE and CIRCULARITY.', 'Variables GYRATIONRADIUS and DISTANCE CIRCULARITY are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable RADIUS RATIO seems to be relevant for the majority of mining tasks.', 'Variables DISTANCE CIRCULARITY and MINORKURTOSIS seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable RADIUS RATIO might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable CIRCULARITY previously than variable COMPACTNESS.'] -vehicle_boxplots.png;A set of boxplots of the variables ['COMPACTNESS', 'CIRCULARITY', 'DISTANCE CIRCULARITY', 'RADIUS RATIO', 'MAJORVARIANCE', 'MINORVARIANCE', 'GYRATIONRADIUS', 'MAJORSKEWNESS', 'MINORSKEWNESS', 'MINORKURTOSIS', 'MAJORKURTOSIS'].;['Variable MAJORSKEWNESS is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable MAJORSKEWNESS shows some outliers, but we can’t be sure of the same for variable COMPACTNESS.', 'Outliers seem to be a problem in the dataset.', 'Variable MINORVARIANCE shows some outlier values.', 'Variable COMPACTNESS doesn’t have any outliers.', 'Variable COMPACTNESS presents some outliers.', 'At least 60 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -vehicle_class_histogram.png;A bar chart showing the distribution of the target variable target.;['Balancing this dataset would be mandatory to improve the results.'] -vehicle_nr_records_nr_variables.png;A bar chart showing the number of 
records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -vehicle_histograms_numeric.png;A set of histograms of the variables ['COMPACTNESS', 'CIRCULARITY', 'DISTANCE CIRCULARITY', 'RADIUS RATIO', 'MAJORVARIANCE', 'MINORVARIANCE', 'GYRATIONRADIUS', 'MAJORSKEWNESS', 'MINORSKEWNESS', 'MINORKURTOSIS', 'MAJORKURTOSIS'].;['All variables, but the class, should be dealt with as date.', 'The variable MAJORVARIANCE can be seen as ordinal.', 'The variable MINORKURTOSIS can be seen as ordinal without losing information.', 'Variable COMPACTNESS is balanced.', 'It is clear that variable COMPACTNESS shows some outliers, but we can’t be sure of the same for variable MINORSKEWNESS.', 'Outliers seem to be a problem in the dataset.', 'Variable MINORSKEWNESS shows some outlier values.', 'Variable MINORSKEWNESS doesn’t have any outliers.', 'Variable CIRCULARITY presents some outliers.', 'At least 50 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for GYRATIONRADIUS and COMPACTNESS variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for COMPACTNESS variable, dummification would be the most adequate encoding.', 'The variable MINORVARIANCE can be coded as ordinal without losing information.', 'Feature generation based on variable MAJORSKEWNESS seems to be promising.', 'Feature generation based on the use of variable MINORVARIANCE wouldn’t be useful, but the use of COMPACTNESS seems to be promising.', 'Given the usual semantics of RADIUS RATIO variable, dummification would have been a better codification.', 'It is better to drop the variable MINORVARIANCE than removing all records with missing values.', 'Not knowing the semantics of MAJORSKEWNESS variable, dummification could have been a more adequate codification.'] -adult_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition hours-per-week <= 41.5 and the second with the condition capital-loss <= 1820.5.;['It is clear that variable fnlwgt is one of the five most relevant features.', 'The variable capital-gain seems to be one of the four most relevant features.', 'The variable capital-loss discriminates between the target values, as shown in the decision tree.', 'It is possible to state that fnlwgt is the first most discriminative variable regarding the class.', 'Variable fnlwgt is one of the most relevant variables.', 'Variable fnlwgt seems to be relevant for the majority of mining tasks.', 'Variables capital-gain and educational-num seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The recall for the presented tree is higher than 75%.', 'The number of False Negatives is higher than the number of True Positives for the presented tree.', 'The number of True Negatives is higher than the number of True Positives for the presented tree.', 'The number of 
True Positives is higher than the number of True Negatives for the presented tree.', 'Considering that A=True<=>[hours-per-week <= 41.5] and B=True<=>[capital-loss <= 1820.5], it is possible to state that KNN algorithm classifies (A, not B) as <=50K for any k ≤ 21974.', 'Considering that A=True<=>[hours-per-week <= 41.5] and B=True<=>[capital-loss <= 1820.5], it is possible to state that KNN algorithm classifies (not A, B) as >50K for any k ≤ 541.', 'Considering that A=True<=>[hours-per-week <= 41.5] and B=True<=>[capital-loss <= 1820.5], the Decision Tree presented classifies (A, not B) as >50K.'] -adult_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.'] -adult_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] -adult_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] -adult_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 5.', 'KNN with 7 neighbour is in overfitting.', 'KNN with less than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 5 neighbors.'] -adult_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 20 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 6.', 'The decision tree is in overfitting for depths above 9.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.'] -adult_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -adult_pca.png;A bar chart showing the explained variance ratio of 6 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 5 and 30%.'] -adult_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['age', 'fnlwgt', 'educational-num', 'capital-gain', 'capital-loss', 'hours-per-week'].;['The intrinsic dimensionality of this dataset is 4.', 'One of the variables fnlwgt or educational-num can be discarded without losing information.', 'The variable educational-num can be discarded without risking losing information.', 'Variables capital-loss and capital-gain are redundant, but we can’t say the same for the pair hours-per-week and educational-num.', 'Variables capital-gain and fnlwgt are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable age seems to be relevant for the majority of mining tasks.', 'Variables capital-gain and hours-per-week seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable hours-per-week might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable capital-gain previously than variable hours-per-week.'] -adult_boxplots.png;A set of boxplots of the variables ['age', 'fnlwgt', 'educational-num', 'capital-gain', 'capital-loss', 'hours-per-week'].;['Variable capital-gain is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable age shows some outliers, but we can’t be sure of the same for variable capital-gain.', 'Outliers seem to be a problem in the dataset.', 'Variable capital-loss shows some outlier values.', 'Variable hours-per-week doesn’t have any outliers.', 'Variable educational-num presents some outliers.', 'At least 85 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -adult_histograms_symbolic.png;A set of bar charts of the variables ['workclass', 'education', 'marital-status', 'occupation', 'relationship', 'race', 'gender'].;['All variables, but the class, should be dealt with as numeric.', 'The variable relationship can be seen as ordinal.', 'The variable relationship can be seen as ordinal without losing information.', 'Considering the common semantics for workclass and education variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for gender variable, dummification would be the most adequate encoding.', 'The variable marital-status can be coded as ordinal without losing information.', 'Feature generation based on variable education seems to be promising.', 'Feature generation based on the use of variable race wouldn’t be useful, but the use of workclass seems to be promising.', 'Given the usual semantics of education variable, dummification would have been a better codification.', 'It is better to drop the variable workclass than removing all records with missing values.', 'Not knowing the semantics of education variable, 
dummification could have been a more adequate codification.'] -adult_class_histogram.png;A bar chart showing the distribution of the target variable income.;['Balancing this dataset would be mandatory to improve the results.'] -adult_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -adult_histograms_numeric.png;A set of histograms of the variables ['age', 'fnlwgt', 'educational-num', 'capital-gain', 'capital-loss', 'hours-per-week'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable capital-gain can be seen as ordinal.', 'The variable age can be seen as ordinal without losing information.', 'Variable hours-per-week is balanced.', 'It is clear that variable capital-loss shows some outliers, but we can’t be sure of the same for variable capital-gain.', 'Outliers seem to be a problem in the dataset.', 'Variable capital-loss shows some outlier values.', 'Variable fnlwgt doesn’t have any outliers.', 'Variable educational-num presents some outliers.', 'At least 85 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for capital-loss and age variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for educational-num variable, dummification would be the most adequate encoding.', 'The variable age can be coded as ordinal without losing information.', 'Feature generation based on variable capital-loss seems to be promising.', 'Feature generation based on the use of variable hours-per-week wouldn’t be useful, but the use of age seems to be promising.', 'Given the usual semantics of capital-gain variable, dummification would have been a better codification.', 'It is better to drop the variable educational-num than removing all records with missing values.', 'Not knowing the semantics of hours-per-week variable, dummification could have been a more adequate codification.'] -Covid_Data_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition CARDIOVASCULAR <= 50.0 and the second with the condition ASTHMA <= 1.5.;['It is clear that variable MEDICAL_UNIT is one of the five most relevant features.', 'The variable ASTHMA seems to be one of the three most relevant features.', 'The variable ASTHMA discriminates between the target values, as shown in the decision tree.', 'It is possible to state that CARDIOVASCULAR is the first most discriminative variable regarding the class.', 'Variable PREGNANT is one of the most relevant variables.', 'Variable MEDICAL_UNIT seems to be relevant for the majority of mining tasks.', 'Variables AGE and PNEUMONIA seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 5%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The accuracy for the presented tree is higher than 75%.', 'The number of True Positives is lower than the number of False
Positives for the presented tree.', 'The number of True Positives is higher than the number of False Negatives for the presented tree.', 'The accuracy for the presented tree is lower than 75%.', 'Considering that A=True<=>[CARDIOVASCULAR <= 50.0] and B=True<=>[ASTHMA <= 1.5], it is possible to state that KNN algorithm classifies (not A, B) as No for any k ≤ 16.', 'Considering that A=True<=>[CARDIOVASCULAR <= 50.0] and B=True<=>[ASTHMA <= 1.5], it is possible to state that KNN algorithm classifies (A,B) as No for any k ≤ 7971.', 'Considering that A=True<=>[CARDIOVASCULAR <= 50.0] and B=True<=>[ASTHMA <= 1.5], it is possible to state that KNN algorithm classifies (A,B) as Yes for any k ≤ 46.'] -Covid_Data_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] -Covid_Data_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] -Covid_Data_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 2, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] -Covid_Data_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 17.', 'KNN with 5 neighbour is in overfitting.', 'KNN with less than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 6 neighbors.'] -Covid_Data_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 5 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 8.', 'The decision tree is in overfitting for depths above 9.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.'] -Covid_Data_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -Covid_Data_pca.png;A bar chart showing the explained variance ratio of 12 principal components.;['The first 5 principal components are enough for explaining half the data variance.', 'Using the first 10 principal components would imply an
error between 15 and 25%.'] -Covid_Data_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['MEDICAL_UNIT', 'PNEUMONIA', 'AGE', 'PREGNANT', 'COPD', 'ASTHMA', 'HIPERTENSION', 'OTHER_DISEASE', 'CARDIOVASCULAR', 'RENAL_CHRONIC', 'TOBACCO', 'ICU'].;['The intrinsic dimensionality of this dataset is 5.', 'One of the variables RENAL_CHRONIC or OTHER_DISEASE can be discarded without losing information.', 'The variable MEDICAL_UNIT can be discarded without risking losing information.', 'Variables TOBACCO and PREGNANT are redundant, but we can’t say the same for the pair HIPERTENSION and RENAL_CHRONIC.', 'Variables PREGNANT and HIPERTENSION are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable TOBACCO seems to be relevant for the majority of mining tasks.', 'Variables AGE and ICU seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable PREGNANT might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable ICU previously than variable MEDICAL_UNIT.'] -Covid_Data_boxplots.png;A set of boxplots of the variables ['MEDICAL_UNIT', 'PNEUMONIA', 'AGE', 'PREGNANT', 'COPD', 'ASTHMA', 'HIPERTENSION', 'OTHER_DISEASE', 'CARDIOVASCULAR', 'RENAL_CHRONIC', 'TOBACCO', 'ICU'].;['Variable OTHER_DISEASE is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable HIPERTENSION shows some outliers, but we can’t be sure of the same for variable COPD.', 'Outliers seem to be a problem in the dataset.', 'Variable HIPERTENSION shows a high number of outlier values.', 'Variable MEDICAL_UNIT doesn’t have any outliers.', 'Variable AGE presents some outliers.', 'At least 50 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -Covid_Data_histograms_symbolic.png;A set of bar charts of the variables ['USMER', 'SEX', 'PATIENT_TYPE'].;['All variables, but the class, should be dealt with as numeric.', 'The variable PATIENT_TYPE can be seen as ordinal.', 'The variable PATIENT_TYPE can be seen as ordinal without losing information.', 'Considering the common semantics for PATIENT_TYPE and USMER variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for USMER variable, dummification would be the most adequate encoding.', 'The variable PATIENT_TYPE can be coded as ordinal without losing information.', 'Feature generation based on variable USMER seems to be promising.', 'Feature generation based on the use of variable SEX wouldn’t be useful, but the use of USMER seems to be promising.', 'Given the usual semantics of SEX variable, dummification would have been a better 
codification.', 'It is better to drop the variable PATIENT_TYPE than removing all records with missing values.', 'Not knowing the semantics of USMER variable, dummification could have been a more adequate codification.'] -Covid_Data_class_histogram.png;A bar chart showing the distribution of the target variable CLASSIFICATION.;['Balancing this dataset would be mandatory to improve the results.'] -Covid_Data_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are symbolic, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -Covid_Data_histograms_numeric.png;A set of histograms of the variables ['MEDICAL_UNIT', 'PNEUMONIA', 'AGE', 'PREGNANT', 'COPD', 'ASTHMA', 'HIPERTENSION', 'OTHER_DISEASE', 'CARDIOVASCULAR', 'RENAL_CHRONIC', 'TOBACCO', 'ICU'].;['All variables, but the class, should be dealt with as binary.', 'The variable ICU can be seen as ordinal.', 'The variable ICU can be seen as ordinal without losing information.', 'Variable PNEUMONIA is balanced.', 'It is clear that variable PNEUMONIA shows some outliers, but we can’t be sure of the same for variable HIPERTENSION.', 'Outliers seem to be a problem in the dataset.', 'Variable COPD shows some outlier values.', 'Variable COPD doesn’t have any outliers.', 'Variable TOBACCO presents some outliers.', 'At least 85 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for TOBACCO and MEDICAL_UNIT variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for PNEUMONIA variable, dummification would be the most adequate encoding.', 'The variable OTHER_DISEASE can be coded as ordinal without losing information.', 'Feature generation based on variable COPD seems to be promising.', 'Feature generation based on the use of variable TOBACCO wouldn’t be useful, but the use of MEDICAL_UNIT seems to be promising.', 'Given the usual semantics of CARDIOVASCULAR variable, dummification would have been a better codification.', 'It is better to drop the variable ASTHMA than removing all records with missing values.', 'Not knowing the semantics of OTHER_DISEASE variable, dummification could have been a more adequate codification.'] -sky_survey_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition dec <= 22.21 and the second with the condition mjd <= 55090.5.;['It is clear that variable run is one of the two most relevant features.', 'The variable run seems to be one of the five most relevant features.', 'The variable run discriminates between the target values, as shown in the decision tree.', 'It is possible to state that dec is the first most discriminative variable regarding the class.', 'Variable redshift is one of the most relevant variables.', 'Variable field seems to be relevant for the majority of mining tasks.', 'Variables run and mjd seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positive is bigger than 
the number of False Negatives.', 'The accuracy for the presented tree is higher than 60%.', 'The number of False Positives is higher than the number of True Positives for the presented tree.', 'The number of False Negatives is higher than the number of False Positives for the presented tree.', 'The number of False Positives reported in the same tree is 10.', 'Considering that A=True<=>[dec <= 22.21] and B=True<=>[mjd <= 55090.5], it is possible to state that Naive Bayes algorithm classifies (A,B), as QSO.', 'Considering that A=True<=>[dec <= 22.21] and B=True<=>[mjd <= 55090.5], it is possible to state that KNN algorithm classifies (A, not B) as QSO for any k ≤ 208.', 'Considering that A=True<=>[dec <= 22.21] and B=True<=>[mjd <= 55090.5], it is possible to state that KNN algorithm classifies (A, not B) as GALAXY for any k ≤ 1728.'] -sky_survey_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.'] -sky_survey_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.'] -sky_survey_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 3, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.'] -sky_survey_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 5.', 'KNN with 7 neighbour is in overfitting.', 'KNN with less than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 5 neighbors.'] -sky_survey_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 5 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 9.', 'The decision tree is in overfitting for depths above 9.', 'We are able to identify the existence of overfitting for decision tree models with more than 6 nodes of depth.'] -sky_survey_pca.png;A bar chart showing the explained variance ratio of 8 principal components.;['The first 7 principal components are enough for explaining half the data variance.', 'Using the first 6 principal components would imply an error between 10 and 30%.'] -sky_survey_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['ra', 'dec', 'run', 'camcol', 'field', 'redshift', 'plate', 'mjd'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables plate or ra can be discarded without losing information.', 'The variable ra can be discarded without risking losing information.', 'Variables run and dec are redundant, but we can’t say the same for the pair plate and field.', 'Variables field and plate are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable plate seems to be relevant for the majority of mining tasks.', 'Variables mjd and dec seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable run might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable camcol previously than variable dec.'] -sky_survey_boxplots.png;A set of boxplots of the variables ['ra', 'dec', 'run', 'camcol', 'field', 'redshift', 'plate', 'mjd'].;['Variable redshift is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable ra shows some outliers, but we can’t be sure of the same for variable dec.', 'Outliers seem to be a problem in the dataset.', 'Variable redshift shows a high number of outlier values.', 'Variable mjd doesn’t have any outliers.', 'Variable field presents some outliers.', 'At least 50 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -sky_survey_class_histogram.png;A bar chart showing the distribution of the target variable class.;['Balancing this dataset would be mandatory to improve the results.'] -sky_survey_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are binary, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -sky_survey_histograms_numeric.png;A set of histograms of the variables ['ra', 'dec', 'run', 'camcol', 'field', 'redshift', 'plate', 'mjd'].;['All variables, but the class, should be dealt with as binary.', 'The variable dec can be seen as ordinal.', 'The variable run can be seen as ordinal without losing information.', 'Variable dec is balanced.', 'It is clear that variable field shows some outliers, but we can’t be sure of the same for variable camcol.', 'Outliers seem to be a problem in the dataset.', 'Variable plate shows some outlier values.', 'Variable field doesn’t have any outliers.', 'Variable redshift presents some outliers.', 'At least 85 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the 
numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for redshift and ra variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for plate variable, dummification would be the most adequate encoding.', 'The variable ra can be coded as ordinal without losing information.', 'Feature generation based on variable plate seems to be promising.', 'Feature generation based on the use of variable run wouldn’t be useful, but the use of ra seems to be promising.', 'Given the usual semantics of plate variable, dummification would have been a better codification.', 'It is better to drop the variable mjd than removing all records with missing values.', 'Not knowing the semantics of camcol variable, dummification could have been a more adequate codification.'] -Wine_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Total phenols <= 2.36 and the second with the condition Proanthocyanins <= 1.58.;['It is clear that variable Alcohol is one of the three most relevant features.', 'The variable OD280-OD315 of diluted wines seems to be one of the four most relevant features.', 'The variable Total phenols discriminates between the target values, as shown in the decision tree.', 'It is possible to state that Hue is the second most discriminative variable regarding the class.', 'Variable Alcalinity of ash is one of the most relevant variables.', 'Variable Proanthocyanins seems to be relevant for the majority of mining tasks.', 'Variables Flavanoids and OD280-OD315 of diluted wines seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 10%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The specificity for the presented tree is lower than 90%.', 'The number of True Positives is lower than the number of True Negatives for the presented tree.', 'The number of False Positives is higher than the number of True Positives for the presented tree.', 'The accuracy for the presented tree is lower than its recall.', 'Considering that A=True<=>[Total phenols <= 2.36] and B=True<=>[Proanthocyanins <= 1.58], it is possible to state that KNN algorithm classifies (not A, B) as 3 for any k ≤ 2.', 'Considering that A=True<=>[Total phenols <= 2.36] and B=True<=>[Proanthocyanins <= 1.58], it is possible to state that KNN algorithm classifies (A,B) as 1 for any k ≤ 2.', 'Considering that A=True<=>[Total phenols <= 2.36] and B=True<=>[Proanthocyanins <= 1.58], it is possible to state that KNN algorithm classifies (A, not B) as 2 for any k ≤ 49.'] -Wine_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.'] -Wine_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] -Wine_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis 
represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 2, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.'] -Wine_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 17.', 'KNN with 7 neighbour is in overfitting.', 'KNN with more than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 2 neighbors.'] -Wine_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 16 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 6.', 'The decision tree is in overfitting for depths above 4.', 'We are able to identify the existence of overfitting for decision tree models with more than 5 nodes of depth.'] -Wine_pca.png;A bar chart showing the explained variance ratio of 11 principal components.;['The first 7 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 5 and 25%.'] -Wine_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['Alcohol', 'Malic acid', 'Ash', 'Alcalinity of ash', 'Total phenols', 'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins', 'Color intensity', 'Hue', 'OD280-OD315 of diluted wines'].;['The intrinsic dimensionality of this dataset is 5.', 'One of the variables Alcalinity of ash or Flavanoids can be discarded without losing information.', 'The variable Alcohol can be discarded without risking losing information.', 'Variables Ash and Flavanoids seem to be useful for classification tasks.', 'Variables Proanthocyanins and Nonflavanoid phenols are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Hue seems to be relevant for the majority of mining tasks.', 'Variables Color intensity and OD280-OD315 of diluted wines seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Malic acid might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Nonflavanoid phenols previously than variable Alcohol.'] -Wine_boxplots.png;A set of boxplots of the variables ['Alcohol', 'Malic acid', 'Ash', 'Alcalinity of ash', 'Total phenols', 'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins', 'Color intensity', 'Hue', 'OD280-OD315 of diluted wines'].;['Variable Flavanoids is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Flavanoids shows some outliers, but we can’t be sure of the same for variable Nonflavanoid phenols.', 'Outliers seem to be a problem in the dataset.', 'Variable Alcalinity of ash shows some outlier values.', 'Variable Alcohol doesn’t have any outliers.', 'Variable Proanthocyanins presents some outliers.', 'At least 85 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -Wine_class_histogram.png;A bar chart showing the distribution of the target variable Class.;['Balancing this dataset would be mandatory to improve the results.'] -Wine_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -Wine_histograms_numeric.png;A set of histograms of the variables ['Alcohol', 'Malic acid', 'Ash', 'Alcalinity of ash', 'Total phenols', 'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins', 'Color intensity', 'Hue', 'OD280-OD315 of diluted wines'].;['All variables, but the class, should be dealt with as date.', 'The variable Hue can be seen as ordinal.', 'The variable Color intensity can be seen as ordinal without losing 
information.', 'Variable Nonflavanoid phenols is balanced.', 'It is clear that variable Nonflavanoid phenols shows some outliers, but we can’t be sure of the same for variable Ash.', 'Outliers seem to be a problem in the dataset.', 'Variable Ash shows a high number of outlier values.', 'Variable Hue doesn’t have any outliers.', 'Variable Proanthocyanins presents some outliers.', 'At least 85 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Hue and Alcohol variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Color intensity variable, dummification would be the most adequate encoding.', 'The variable Nonflavanoid phenols can be coded as ordinal without losing information.', 'Feature generation based on variable Alcalinity of ash seems to be promising.', 'Feature generation based on the use of variable Ash wouldn’t be useful, but the use of Alcohol seems to be promising.', 'Given the usual semantics of Proanthocyanins variable, dummification would have been a better codification.', 'It is better to drop the variable Alcalinity of ash than removing all records with missing values.', 'Not knowing the semantics of Flavanoids variable, dummification could have been a more adequate codification.'] -water_potability_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Hardness <= 278.29 and the second with the condition Chloramines <= 6.7.;['It is clear that variable Turbidity is one of the three most relevant features.', 'The variable Sulfate seems to be one of the three most relevant features.', 'The variable ph discriminates between the target values, as shown in the decision tree.', 'It is possible to state that Conductivity is the second most discriminative variable regarding the class.', 'Variable Chloramines is one of the most relevant variables.', 'Variable Trihalomethanes seems to be relevant for the majority of mining tasks.', 'Variables Turbidity and Chloramines seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 5%.', 'As reported in the tree, the number of False Positive is bigger than the number of False Negatives.', 'The specificity for the presented tree is lower than 60%.', 'The number of True Negatives is lower than the number of False Positives for the presented tree.', 'The number of False Positives is lower than the number of False Negatives for the presented tree.', 'The number of True Positives reported in the same tree is 50.', 'Considering that A=True<=>[Hardness <= 278.29] and B=True<=>[Chloramines <= 6.7], it is possible to state that KNN algorithm classifies (A,B) as 0 for any k ≤ 8.', 'Considering that A=True<=>[Hardness <= 278.29] and B=True<=>[Chloramines <= 6.7], the Decision Tree presented classifies (A,B) as 0.', 'Considering that A=True<=>[Hardness <= 278.29] and B=True<=>[Chloramines <= 6.7], it is possible to state that KNN algorithm classifies (A,B) as 1 for any k ≤ 6.'] -water_potability_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of 
overfitting for MLP models trained longer than 300 episodes.'] -water_potability_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] -water_potability_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 2, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] -water_potability_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 13.', 'KNN with 11 neighbour is in overfitting.', 'KNN with more than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 6 neighbors.'] -water_potability_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 12 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 9.', 'The decision tree is in overfitting for depths above 9.', 'We are able to identify the existence of overfitting for decision tree models with more than 3 nodes of depth.'] -water_potability_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] -water_potability_pca.png;A bar chart showing the explained variance ratio of 7 principal components.;['The first 3 principal components are enough for explaining half the data variance.', 'Using the first 6 principal components would imply an error between 5 and 25%.'] -water_potability_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. 
The variables are ['ph', 'Hardness', 'Chloramines', 'Sulfate', 'Conductivity', 'Trihalomethanes', 'Turbidity'].;['The intrinsic dimensionality of this dataset is 5.', 'One of the variables Sulfate or ph can be discarded without losing information.', 'The variable Turbidity can be discarded without risking losing information.', 'Variables Chloramines and Trihalomethanes are redundant, but we can’t say the same for the pair Conductivity and ph.', 'Variables Hardness and Turbidity are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Turbidity seems to be relevant for the majority of mining tasks.', 'Variables Trihalomethanes and ph seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Turbidity might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Chloramines previously than variable Conductivity.'] -water_potability_boxplots.png;A set of boxplots of the variables ['ph', 'Hardness', 'Chloramines', 'Sulfate', 'Conductivity', 'Trihalomethanes', 'Turbidity'].;['Variable Turbidity is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Hardness shows some outliers, but we can’t be sure of the same for variable Chloramines.', 'Outliers seem to be a problem in the dataset.', 'Variable Hardness shows some outlier values.', 'Variable Chloramines doesn’t have any outliers.', 'Variable Sulfate presents some outliers.', 'At least 60 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -water_potability_mv.png;A bar chart showing the number of missing values per variable of the dataset. 
The variables that have missing values are: ['ph', 'Sulfate', 'Trihalomethanes'].;['Discarding variable Sulfate would be better than discarding all the records with missing values for that variable.', 'Dropping all records with missing values would be better than to drop the variables with missing values.', 'Dropping all rows with missing values can lead to a dataset with less than 25% of the original data.', 'There is no reason to believe that discarding records showing missing values is safer than discarding the corresponding variables in this case.', 'Feature generation based on variable ph seems to be promising.', 'It is better to drop the variable ph than removing all records with missing values.'] -water_potability_class_histogram.png;A bar chart showing the distribution of the target variable Potability.;['Balancing this dataset would be mandatory to improve the results.'] -water_potability_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -water_potability_histograms_numeric.png;A set of histograms of the variables ['ph', 'Hardness', 'Chloramines', 'Sulfate', 'Conductivity', 'Trihalomethanes', 'Turbidity'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable Hardness can be seen as ordinal.', 'The variable ph can be seen as ordinal without losing information.', 'Variable Turbidity is balanced.', 'It is clear that variable Trihalomethanes shows some outliers, but we can’t be sure of the same for variable ph.', 'Outliers seem to be a problem in the dataset.', 'Variable Turbidity shows some outlier values.', 'Variable Conductivity doesn’t have any outliers.', 'Variable Sulfate presents some outliers.', 'At least 50 of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Conductivity and ph variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Sulfate variable, dummification would be the most adequate encoding.', 'The variable Hardness can be coded as ordinal without losing information.', 'Feature generation based on variable Hardness seems to be promising.', 'Feature generation based on the use of variable ph wouldn’t be useful, but the use of Hardness seems to be promising.', 'Given the usual semantics of Sulfate variable, dummification would have been a better codification.', 'It is better to drop the variable Trihalomethanes than removing all records with missing values.', 'Not knowing the semantics of Sulfate variable, dummification could have been a more adequate codification.'] -abalone_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Height <= 0.13 and the second with the condition Diameter <= 0.45.;['It is clear that variable Whole weight is one of the four most relevant features.', 'The variable Rings seems to be one of the four most relevant features.', 'The variable Rings discriminates between the target values, as shown in the decision tree.', 'It is possible to state 
that Viscera weight is the first most discriminative variable regarding the class.', 'Variable Viscera weight is one of the most relevant variables.', 'Variable Shell weight seems to be relevant for the majority of mining tasks.', 'Variables Shucked weight and Length seem to be useful for classification tasks.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 5%.', 'As reported in the tree, the number of False Positive is smaller than the number of False Negatives.', 'The precision for the presented tree is higher than 90%.', 'The number of False Positives is lower than the number of True Negatives for the presented tree.', 'The number of True Negatives is higher than the number of False Negatives for the presented tree.', 'The number of True Negatives is lower than the number of True Positives for the presented tree.', 'Considering that A=True<=>[Height <= 0.13] and B=True<=>[Diameter <= 0.45], it is possible to state that KNN algorithm classifies (A, not B) as F for any k ≤ 1191.', 'Considering that A=True<=>[Height <= 0.13] and B=True<=>[Diameter <= 0.45], it is possible to state that KNN algorithm classifies (A,B) as I for any k ≤ 1191.', 'Considering that A=True<=>[Height <= 0.13] and B=True<=>[Diameter <= 0.45], it is possible to state that KNN algorithm classifies (A,B) as F for any k ≤ 117.'] -abalone_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 700 episodes.'] -abalone_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] -abalone_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 10, may be explained by its estimators being in overfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] -abalone_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 17.', 'KNN with 5 neighbour is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 6 neighbors.'] -abalone_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 20 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 3.', 'The decision tree is in overfitting for depths above 6.', 'We are able to identify the existence of 
overfitting for decision tree models with more than 3 nodes of depth.'] -abalone_pca.png;A bar chart showing the explained variance ratio of 8 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 3 principal components would imply an error between 15 and 30%.'] -abalone_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Length', 'Diameter', 'Height', 'Whole weight', 'Shucked weight', 'Viscera weight', 'Shell weight', 'Rings'].;['The intrinsic dimensionality of this dataset is 4.', 'One of the variables Rings or Shucked weight can be discarded without losing information.', 'The variable Height can be discarded without risking losing information.', 'Variables Shucked weight and Whole weight are redundant, but we can’t say the same for the pair Diameter and Rings.', 'Variables Viscera weight and Diameter are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Diameter seems to be relevant for the majority of mining tasks.', 'Variables Shell weight and Length seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy, would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable Length might improve the training of decision trees .', 'There is evidence in favour for sequential backward selection to select variable Diameter previously than variable Length.'] -abalone_boxplots.png;A set of boxplots of the variables ['Length', 'Diameter', 'Height', 'Whole weight', 'Shucked weight', 'Viscera weight', 'Shell weight', 'Rings'].;['Variable Height is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable Shucked weight shows some outliers, but we can’t be sure of the same for variable Shell weight.', 'Outliers seem to be a problem in the dataset.', 'Variable Rings shows a high number of outlier values.', 'Variable Viscera weight doesn’t have any outliers.', 'Variable Length presents some outliers.', 'At least 75 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory, in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] -abalone_class_histogram.png;A bar chart showing the distribution of the target variable Sex.;['Balancing this dataset would be mandatory to improve the results.'] -abalone_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are binary, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] -abalone_histograms_numeric.png;A set of histograms of the variables ['Length', 'Diameter', 'Height', 'Whole weight', 'Shucked weight', 'Viscera weight', 'Shell 
weight', 'Rings'].;['All variables, but the class, should be dealt with as numeric.', 'The variable Diameter can be seen as ordinal.', 'The variable Whole weight can be seen as ordinal without losing information.', 'Variable Rings is balanced.', 'It is clear that variable Height shows some outliers, but we can’t be sure of the same for variable Shell weight.', 'Outliers seem to be a problem in the dataset.', 'Variable Viscera weight shows some outlier values.', 'Variable Shucked weight doesn’t have any outliers.', 'Variable Viscera weight presents some outliers.', 'At least 75 of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Rings and Length variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Whole weight variable, dummification would be the most adequate encoding.', 'The variable Height can be coded as ordinal without losing information.', 'Feature generation based on variable Whole weight seems to be promising.', 'Feature generation based on the use of variable Diameter wouldn’t be useful, but the use of Length seems to be promising.', 'Given the usual semantics of Rings variable, dummification would have been a better codification.', 'It is better to drop the variable Diameter than removing all records with missing values.', 'Not knowing the semantics of Shell weight variable, dummification could have been a more adequate codification.']
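
Every deleted row above follows the same three-field, semicolon-separated layout: a chart filename, a free-text chart description, and a Python-literal list of candidate statements about that chart. For anyone restoring or reusing the file, a minimal loading sketch follows; the load_rows helper name is an illustrative assumption, not tooling that ships with the repository, and it assumes the file is restored at its pre-deletion path.

    import ast

    def load_rows(path):
        """Parse rows of the form: chart;description;['statement', ...]."""
        rows = []
        with open(path, encoding="utf-8") as fh:
            next(fh, None)  # skip the header row
            for line in fh:
                line = line.strip()
                if not line:
                    continue
                # Split on the first two semicolons only, in case a
                # statement ever contains a semicolon itself.
                chart, description, statements = line.split(";", 2)
                # The third field is a Python-style list literal.
                rows.append((chart, description, ast.literal_eval(statements)))
        return rows

    # Example usage: count the candidate statements attached to each chart.
    for chart, _, statements in load_rows("desc_questions_train_final.csv"):
        print(chart, len(statements))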