|
Chart;Description;Questions |
|
smoking_drinking_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition SMK_stat_type_cd <= 1.5 and the second with the condition gamma_GTP <= 35.5.;['The variable SMK_stat_type_cd discriminates between the target values, as shown in the decision tree.', 'Variable gamma_GTP is one of the most relevant variables.', 'A smaller tree would be delivered if we applied post-pruning, accepting an accuracy reduction of 5%.', 'As reported in the tree, the number of False Positives is bigger than the number of False Negatives.', 'The specificity for the presented tree is higher than 90%.', 'The number of False Negatives reported in the same tree is 10.', 'The number of True Positives is lower than the number of False Negatives for the presented tree.', 'The number of False Positives is higher than the number of False Negatives for the presented tree.', 'Considering that A=True<=>[SMK_stat_type_cd <= 1.5] and B=True<=>[gamma_GTP <= 35.5], it is possible to state that the KNN algorithm classifies (A, B) as N for any k ≤ 3135.', 'Considering that A=True<=>[SMK_stat_type_cd <= 1.5] and B=True<=>[gamma_GTP <= 35.5], it is possible to state that the Naive Bayes algorithm classifies (not A, B) as N.', 'Considering that A=True<=>[SMK_stat_type_cd <= 1.5] and B=True<=>[gamma_GTP <= 35.5], the Decision Tree presented classifies (A, not B) as Y.'] |
|
smoking_drinking_overfitting_mlp.png;A multi-line chart showing the overfitting of an MLP where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 iterations.'] |
|
smoking_drinking_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.'] |
|
smoking_drinking_overfitting_rf.png;A multi-line chart showing the overfitting of a random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for the Random Forest identified as 10 may be explained by its estimators being in underfitting.', 'The random forest results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.'] |
|
smoking_drinking_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 13.', 'KNN with 11 neighbours is in overfitting.', 'KNN with less than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 2 neighbors.'] |
|
smoking_drinking_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with a depth of 20 is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters overfitting for models with depth higher than 10.', 'The decision tree is in overfitting for depths above 3.', 'We are able to identify the existence of overfitting for decision tree models with a depth greater than 5.'] |
|
smoking_drinking_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with depth due to the overfitting phenomenon.'] |
|
smoking_drinking_pca.png;A bar chart showing the explained variance ratio of 12 principal components.;['The first 6 principal components are enough for explaining half the data variance.', 'Using the first 8 principal components would imply an error between 15 and 30%.'] |
|
smoking_drinking_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['age', 'height', 'weight', 'waistline', 'SBP', 'BLDS', 'tot_chole', 'LDL_chole', 'triglyceride', 'hemoglobin', 'gamma_GTP', 'SMK_stat_type_cd'].;['The intrinsic dimensionality of this dataset is 7.', 'One of the variables waistline or age can be discarded without losing information.', 'The variable hemoglobin can be discarded without risking losing information.', 'Variables BLDS and weight are redundant, but we can’t say the same for the pair waistline and LDL_chole.', 'Variables age and height are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable age seems to be relevant for the majority of mining tasks.', 'Variables height and waistline seem to be useful for classification tasks.', 'Applying an unsupervised feature selection based on redundancy would not increase the performance of most training algorithms on this dataset.', 'Removing variable waistline might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection selecting variable LDL_chole before variable SMK_stat_type_cd.'] |
|
smoking_drinking_boxplots.png;A set of boxplots of the variables ['age', 'height', 'weight', 'waistline', 'SBP', 'BLDS', 'tot_chole', 'LDL_chole', 'triglyceride', 'hemoglobin', 'gamma_GTP', 'SMK_stat_type_cd'].;['Variable waistline is balanced.', 'These boxplots show that the data is not normalized.', 'It is clear that variable tot_chole shows some outliers, but we can’t be sure of the same for variable waistline.', 'Outliers seem to be a problem in the dataset.', 'Variable tot_chole shows a high number of outlier values.', 'Variable SMK_stat_type_cd doesn’t have any outliers.', 'Variable BLDS presents some outliers.', 'At least 85% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] |
|
smoking_drinking_histograms_symbolic.png;A set of bar charts of the variables ['sex', 'hear_left', 'hear_right'].;['All variables, but the class, should be dealt with as binary.', 'The variable hear_right can be seen as ordinal.', 'The variable hear_right can be seen as ordinal without losing information.', 'Considering the common semantics for hear_left and sex variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for hear_right variable, dummification would be the most adequate encoding.', 'The variable sex can be coded as ordinal without losing information.', 'Feature generation based on variable hear_left seems to be promising.', 'Feature generation based on the use of variable hear_right wouldn’t be useful, but the use of sex seems to be promising.', 'Given the usual semantics of hear_right variable, dummification would have been a better codification.', 'It is better to drop the variable hear_left than removing all records with missing values.', 'Not knowing the semantics of hear_right variable, dummification could have been a more adequate codification.'] |
|
smoking_drinking_class_histogram.png;A bar chart showing the distribution of the target variable DRK_YN.;['Balancing this dataset would be mandatory to improve the results.'] |
|
smoking_drinking_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] |
|
smoking_drinking_histograms_numeric.png;A set of histograms of the variables ['age', 'height', 'weight', 'waistline', 'SBP', 'BLDS', 'tot_chole', 'LDL_chole', 'triglyceride', 'hemoglobin', 'gamma_GTP', 'SMK_stat_type_cd'].;['All variables, but the class, should be dealt with as date.', 'The variable SBP can be seen as ordinal.', 'The variable tot_chole can be seen as ordinal without losing information.', 'Variable weight is balanced.', 'It is clear that variable height shows some outliers, but we can’t be sure of the same for variable age.', 'Outliers seem to be a problem in the dataset.', 'Variable LDL_chole shows some outlier values.', 'Variable tot_chole doesn’t have any outliers.', 'Variable gamma_GTP presents some outliers.', 'At least 60% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for age and height variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for weight variable, dummification would be the most adequate encoding.', 'The variable hemoglobin can be coded as ordinal without losing information.', 'Feature generation based on variable waistline seems to be promising.', 'Feature generation based on the use of variable height wouldn’t be useful, but the use of age seems to be promising.', 'Given the usual semantics of BLDS variable, dummification would have been a better codification.', 'It is better to drop the variable SMK_stat_type_cd than removing all records with missing values.', 'Not knowing the semantics of hemoglobin variable, dummification could have been a more adequate codification.'] |
|
BankNoteAuthentication_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition skewness <= 5.16 and the second with the condition curtosis <= 0.19.;['The variable curtosis discriminates between the target values, as shown in the decision tree.', 'Variable skewness is one of the most relevant variables.', 'A smaller tree would be delivered if we applied post-pruning, accepting an accuracy reduction of 5%.', 'As reported in the tree, the number of False Positives is bigger than the number of False Negatives.', 'The recall for the presented tree is higher than 75%.', 'The number of False Positives reported in the same tree is 10.', 'The number of True Negatives is lower than the number of False Positives for the presented tree.', 'The recall for the presented tree is lower than its accuracy.', 'Considering that A=True<=>[skewness <= 5.16] and B=True<=>[curtosis <= 0.19], it is possible to state that the KNN algorithm classifies (A, not B) as 0 for any k ≤ 214.', 'Considering that A=True<=>[skewness <= 5.16] and B=True<=>[curtosis <= 0.19], it is possible to state that the KNN algorithm classifies (not A, B) as 1 for any k ≤ 214.', 'Considering that A=True<=>[skewness <= 5.16] and B=True<=>[curtosis <= 0.19], it is possible to state that the KNN algorithm classifies (A, B) as 0 for any k ≤ 131.'] |
|
BankNoteAuthentication_overfitting_mlp.png;A multi-line chart showing the overfitting of an MLP where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 iterations.'] |
|
BankNoteAuthentication_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] |
|
BankNoteAuthentication_overfitting_rf.png;A multi-line chart showing the overfitting of a random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for the Random Forest identified as 10 may be explained by its estimators being in overfitting.', 'The random forest results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] |
|
BankNoteAuthentication_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 5.', 'KNN with 7 neighbours is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 2 neighbors.'] |
|
BankNoteAuthentication_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with a depth of 5 is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters overfitting for models with depth higher than 10.', 'The decision tree is in overfitting for depths above 10.', 'We are able to identify the existence of overfitting for decision tree models with a depth greater than 3.'] |
|
BankNoteAuthentication_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with depth due to the overfitting phenomenon.'] |
|
BankNoteAuthentication_pca.png;A bar chart showing the explained variance ratio of 4 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 15 and 30%.'] |
|
BankNoteAuthentication_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['variance', 'skewness', 'curtosis', 'entropy'].;['The intrinsic dimensionality of this dataset is 3.', 'One of the variables variance or curtosis can be discarded without losing information.', 'The variable skewness can be discarded without risking losing information.', 'Variables entropy and curtosis are redundant, but we can’t say the same for the pair variance and skewness.', 'Variables curtosis and entropy are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable skewness seems to be relevant for the majority of mining tasks.', 'Variables curtosis and variance seem to be useful for classification tasks.', 'Applying an unsupervised feature selection based on redundancy would not increase the performance of most training algorithms on this dataset.', 'Removing variable variance might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection selecting variable variance before variable skewness.'] |
|
BankNoteAuthentication_boxplots.png;A set of boxplots of the variables ['variance', 'skewness', 'curtosis', 'entropy'].;['Variable curtosis is balanced.', 'These boxplots show that the data is not normalized.', 'It is clear that variable curtosis shows some outliers, but we can’t be sure of the same for variable entropy.', 'Outliers seem to be a problem in the dataset.', 'Variable skewness shows a high number of outlier values.', 'Variable skewness doesn’t have any outliers.', 'Variable variance presents some outliers.', 'At least 75% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] |
|
BankNoteAuthentication_class_histogram.png;A bar chart showing the distribution of the target variable class.;['Balancing this dataset would be mandatory to improve the results.'] |
|
BankNoteAuthentication_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] |
|
BankNoteAuthentication_histograms_numeric.png;A set of histograms of the variables ['variance', 'skewness', 'curtosis', 'entropy'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable skewness can be seen as ordinal.', 'The variable skewness can be seen as ordinal without losing information.', 'Variable variance is balanced.', 'It is clear that variable variance shows some outliers, but we can’t be sure of the same for variable entropy.', 'Outliers seem to be a problem in the dataset.', 'Variable variance shows a high number of outlier values.', 'Variable skewness doesn’t have any outliers.', 'Variable curtosis presents some outliers.', 'At least 60% of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for variance and skewness variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for curtosis variable, dummification would be the most adequate encoding.', 'The variable entropy can be coded as ordinal without losing information.', 'Feature generation based on variable skewness seems to be promising.', 'Feature generation based on the use of variable curtosis wouldn’t be useful, but the use of variance seems to be promising.', 'Given the usual semantics of curtosis variable, dummification would have been a better codification.', 'It is better to drop the variable skewness than removing all records with missing values.', 'Not knowing the semantics of entropy variable, dummification could have been a more adequate codification.'] |
|
Iris_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition PetalWidthCm <= 0.7 and the second with the condition PetalWidthCm <= 1.75.;['The variable PetalWidthCm discriminates between the target values, as shown in the decision tree.', 'Variable PetalWidthCm is one of the most relevant variables.', 'A smaller tree would be delivered if we applied post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positives is smaller than the number of False Negatives.', 'The recall for the presented tree is lower than 75%.', 'The number of True Negatives reported in the same tree is 30.', 'The number of True Positives is lower than the number of False Negatives for the presented tree.', 'The precision for the presented tree is lower than 90%.', 'Considering that A=True<=>[PetalWidthCm <= 0.7] and B=True<=>[PetalWidthCm <= 1.75], the Decision Tree presented classifies (not A, not B) as Iris-versicolor.', 'Considering that A=True<=>[PetalWidthCm <= 0.7] and B=True<=>[PetalWidthCm <= 1.75], it is possible to state that the KNN algorithm classifies (A, B) as Iris-virginica for any k ≤ 38.', 'Considering that A=True<=>[PetalWidthCm <= 0.7] and B=True<=>[PetalWidthCm <= 1.75], it is possible to state that the KNN algorithm classifies (A, B) as Iris-setosa for any k ≤ 32.'] |
|
Iris_overfitting_mlp.png;A multi-line chart showing the overfitting of an MLP where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 iterations.'] |
|
Iris_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1002 estimators.'] |
|
Iris_overfitting_rf.png;A multi-line chart showing the overfitting of a random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for the Random Forest identified as 10 may be explained by its estimators being in underfitting.', 'The random forest results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] |
|
Iris_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 17.', 'KNN with 11 neighbours is in overfitting.', 'KNN with less than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 4 neighbors.'] |
|
Iris_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with a depth of 16 is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters overfitting for models with depth higher than 8.', 'The decision tree is in overfitting for depths above 10.', 'We are able to identify the existence of overfitting for decision tree models with a depth greater than 3.'] |
|
Iris_pca.png;A bar chart showing the explained variance ratio of 4 principal components.;['The first 2 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 10 and 25%.'] |
|
Iris_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables PetalWidthCm or SepalLengthCm can be discarded without losing information.', 'The variable PetalLengthCm can be discarded without risking losing information.', 'Variables SepalLengthCm and SepalWidthCm are redundant, but we can’t say the same for the pair PetalLengthCm and PetalWidthCm.', 'Variables PetalLengthCm and SepalLengthCm are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable SepalLengthCm seems to be relevant for the majority of mining tasks.', 'Variables PetalLengthCm and SepalLengthCm seem to be useful for classification tasks.', 'Applying an unsupervised feature selection based on redundancy would not increase the performance of most training algorithms on this dataset.', 'Removing variable PetalWidthCm might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection selecting variable PetalLengthCm before variable SepalWidthCm.'] |
|
Iris_boxplots.png;A set of boxplots of the variables ['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm'].;['Variable PetalWidthCm is balanced.', 'These boxplots show that the data is not normalized.', 'It is clear that variable SepalWidthCm shows some outliers, but we can’t be sure of the same for variable PetalLengthCm.', 'Outliers seem to be a problem in the dataset.', 'Variable PetalLengthCm shows some outlier values.', 'Variable SepalLengthCm doesn’t have any outliers.', 'Variable PetalWidthCm presents some outliers.', 'At least 75% of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] |
|
Iris_class_histogram.png;A bar chart showing the distribution of the target variable Species.;['Balancing this dataset would be mandatory to improve the results.'] |
|
Iris_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are numeric, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] |
|
Iris_histograms_numeric.png;A set of histograms of the variables ['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm'].;['All variables, but the class, should be dealt with as date.', 'The variable PetalWidthCm can be seen as ordinal.', 'The variable SepalLengthCm can be seen as ordinal without losing information.', 'Variable PetalWidthCm is balanced.', 'It is clear that variable PetalWidthCm shows some outliers, but we can’t be sure of the same for variable SepalLengthCm.', 'Outliers seem to be a problem in the dataset.', 'Variable SepalWidthCm shows a high number of outlier values.', 'Variable SepalWidthCm doesn’t have any outliers.', 'Variable PetalLengthCm presents some outliers.', 'At least 75% of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for PetalWidthCm and SepalLengthCm variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for PetalLengthCm variable, dummification would be the most adequate encoding.', 'The variable PetalLengthCm can be coded as ordinal without losing information.', 'Feature generation based on variable SepalWidthCm seems to be promising.', 'Feature generation based on the use of variable PetalLengthCm wouldn’t be useful, but the use of SepalLengthCm seems to be promising.', 'Given the usual semantics of PetalLengthCm variable, dummification would have been a better codification.', 'It is better to drop the variable SepalWidthCm than removing all records with missing values.', 'Not knowing the semantics of SepalWidthCm variable, dummification could have been a more adequate codification.'] |
|
phone_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition int_memory <= 30.5 and the second with the condition mobile_wt <= 91.5.;['The variable mobile_wt discriminates between the target values, as shown in the decision tree.', 'Variable mobile_wt is one of the most relevant variables.', 'A smaller tree would be delivered if we applied post-pruning, accepting an accuracy reduction of 6%.', 'As reported in the tree, the number of False Positives is bigger than the number of False Negatives.', 'The specificity for the presented tree is lower than 60%.', 'The number of False Positives is lower than the number of False Negatives for the presented tree.', 'The number of True Negatives is higher than the number of True Positives for the presented tree.', 'The number of False Positives is higher than the number of True Positives for the presented tree.', 'Considering that A=True<=>[int_memory <= 30.5] and B=True<=>[mobile_wt <= 91.5], it is possible to state that the KNN algorithm classifies (A, B) as 2 for any k ≤ 636.', 'Considering that A=True<=>[int_memory <= 30.5] and B=True<=>[mobile_wt <= 91.5], the Decision Tree presented classifies (not A, B) as 1.', 'Considering that A=True<=>[int_memory <= 30.5] and B=True<=>[mobile_wt <= 91.5], the Decision Tree presented classifies (A, not B) as 0.'] |
|
phone_overfitting_mlp.png;A multi-line chart showing the overfitting of an MLP where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 300 iterations.'] |
|
phone_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] |
|
phone_overfitting_rf.png;A multi-line chart showing the overfitting of a random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for the Random Forest identified as 10 may be explained by its estimators being in underfitting.', 'The random forest results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] |
|
phone_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 13.', 'KNN with 5 neighbours is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 5 neighbors.'] |
|
phone_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with a depth of 9 is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters overfitting for models with depth higher than 5.', 'The decision tree is in overfitting for depths above 4.', 'We are able to identify the existence of overfitting for decision tree models with a depth greater than 5.'] |
|
phone_pca.png;A bar chart showing the explained variance ratio of 12 principal components.;['The first 8 principal components are enough for explaining half the data variance.', 'Using the first 11 principal components would imply an error between 10 and 25%.'] |
|
phone_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['battery_power', 'fc', 'int_memory', 'mobile_wt', 'n_cores', 'pc', 'px_height', 'px_width', 'ram', 'sc_h', 'sc_w', 'talk_time'].;['The intrinsic dimensionality of this dataset is 11.', 'One of the variables px_height or battery_power can be discarded without losing information.', 'The variable battery_power can be discarded without risking losing information.', 'Variables ram and px_width are redundant, but we can’t say the same for the pair mobile_wt and sc_h.', 'Variables px_height and sc_w are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable n_cores seems to be relevant for the majority of mining tasks.', 'Variables sc_h and fc seem to be useful for classification tasks.', 'Applying an unsupervised feature selection based on redundancy would not increase the performance of most training algorithms on this dataset.', 'Removing variable sc_h might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection selecting variable px_height before variable px_width.'] |
|
phone_boxplots.png;A set of boxplots of the variables ['battery_power', 'fc', 'int_memory', 'mobile_wt', 'n_cores', 'pc', 'px_height', 'px_width', 'ram', 'sc_h', 'sc_w', 'talk_time'].;['Variable n_cores is balanced.', 'These boxplots show that the data is not normalized.', 'It is clear that variable talk_time shows some outliers, but we can’t be sure of the same for variable px_width.', 'Outliers seem to be a problem in the dataset.', 'Variable px_height shows some outlier values.', 'Variable sc_w doesn’t have any outliers.', 'Variable pc presents some outliers.', 'At least 50% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] |
|
phone_histograms_symbolic.png;A set of bar charts of the variables ['blue', 'dual_sim', 'four_g', 'three_g', 'touch_screen', 'wifi'].;['All variables, but the class, should be dealt with as date.', 'The variable four_g can be seen as ordinal.', 'The variable wifi can be seen as ordinal without losing information.', 'Considering the common semantics for touch_screen and blue variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for three_g variable, dummification would be the most adequate encoding.', 'The variable three_g can be coded as ordinal without losing information.', 'Feature generation based on variable four_g seems to be promising.', 'Feature generation based on the use of variable three_g wouldn’t be useful, but the use of blue seems to be promising.', 'Given the usual semantics of three_g variable, dummification would have been a better codification.', 'It is better to drop the variable three_g than removing all records with missing values.', 'Not knowing the semantics of four_g variable, dummification could have been a more adequate codification.'] |
|
phone_class_histogram.png;A bar chart showing the distribution of the target variable price_range.;['Balancing this dataset would be mandatory to improve the results.'] |
|
phone_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are binary, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] |
|
phone_histograms_numeric.png;A set of histograms of the variables ['battery_power', 'fc', 'int_memory', 'mobile_wt', 'n_cores', 'pc', 'px_height', 'px_width', 'ram', 'sc_h', 'sc_w', 'talk_time'].;['All variables, but the class, should be dealt with as binary.', 'The variable int_memory can be seen as ordinal.', 'The variable fc can be seen as ordinal without losing information.', 'Variable sc_h is balanced.', 'It is clear that variable sc_w shows some outliers, but we can’t be sure of the same for variable sc_h.', 'Outliers seem to be a problem in the dataset.', 'Variable pc shows a high number of outlier values.', 'Variable ram doesn’t have any outliers.', 'Variable fc presents some outliers.', 'At least 60% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for px_height and battery_power variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for px_height variable, dummification would be the most adequate encoding.', 'The variable battery_power can be coded as ordinal without losing information.', 'Feature generation based on variable mobile_wt seems to be promising.', 'Feature generation based on the use of variable sc_h wouldn’t be useful, but the use of battery_power seems to be promising.', 'Given the usual semantics of mobile_wt variable, dummification would have been a better codification.', 'It is better to drop the variable mobile_wt than removing all records with missing values.', 'Not knowing the semantics of talk_time variable, dummification could have been a more adequate codification.'] |
|
Titanic_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Pclass <= 2.5 and the second with the condition Parch <= 0.5.;['The variable Parch discriminates between the target values, as shown in the decision tree.', 'Variable Parch is one of the most relevant variables.', 'A smaller tree would be delivered if we applied post-pruning, accepting an accuracy reduction of 5%.', 'As reported in the tree, the number of False Positives is bigger than the number of False Negatives.', 'The accuracy for the presented tree is lower than 75%.', 'The number of True Negatives is lower than the number of False Positives for the presented tree.', 'The number of True Negatives is higher than the number of False Positives for the presented tree.', 'The number of True Positives is higher than the number of False Negatives for the presented tree.', 'Considering that A=True<=>[Pclass <= 2.5] and B=True<=>[Parch <= 0.5], it is possible to state that the KNN algorithm classifies (not A, B) as 1 for any k ≤ 181.', 'Considering that A=True<=>[Pclass <= 2.5] and B=True<=>[Parch <= 0.5], it is possible to state that the Naive Bayes algorithm classifies (A, B) as 1.', 'Considering that A=True<=>[Pclass <= 2.5] and B=True<=>[Parch <= 0.5], it is possible to state that the Naive Bayes algorithm classifies (A, not B) as 0.'] |
|
Titanic_overfitting_mlp.png;A multi-line chart showing the overfitting of an MLP where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 iterations.'] |
|
Titanic_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.'] |
|
Titanic_overfitting_rf.png;A multi-line chart showing the overfitting of a random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for the Random Forest identified as 10 may be explained by its estimators being in overfitting.', 'The random forest results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 502 estimators.'] |
|
Titanic_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 13.', 'KNN with 11 neighbours is in overfitting.', 'KNN with less than 7 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 3 neighbors.'] |
|
Titanic_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with a depth of 20 is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters overfitting for models with depth higher than 9.', 'The decision tree is in overfitting for depths above 5.', 'We are able to identify the existence of overfitting for decision tree models with a depth greater than 6.'] |
|
Titanic_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with depth due to the overfitting phenomenon.'] |
|
Titanic_pca.png;A bar chart showing the explained variance ratio of 5 principal components.;['The first 4 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 15 and 20%.'] |
|
Titanic_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Pclass', 'Age', 'SibSp', 'Parch', 'Fare'].;['The intrinsic dimensionality of this dataset is 4.', 'One of the variables SibSp or Parch can be discarded without losing information.', 'The variable Parch can be discarded without risking losing information.', 'Variables Fare and Age seem to be useful for classification tasks.', 'Variables Age and Fare are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Pclass seems to be relevant for the majority of mining tasks.', 'Variables Parch and SibSp seem to be useful for classification tasks.', 'Applying an unsupervised feature selection based on redundancy would not increase the performance of most training algorithms on this dataset.', 'Removing variable SibSp might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection selecting variable Fare before variable Age.'] |
|
Titanic_boxplots.png;A set of boxplots of the variables ['Pclass', 'Age', 'SibSp', 'Parch', 'Fare'].;['Variable Age is balanced.', 'These boxplots show that the data is not normalized.', 'It is clear that variable Pclass shows some outliers, but we can’t be sure of the same for variable Fare.', 'Outliers seem to be a problem in the dataset.', 'Variable Fare shows a high number of outlier values.', 'Variable Fare doesn’t have any outliers.', 'Variable Parch presents some outliers.', 'At least 50% of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] |
|
Titanic_histograms_symbolic.png;A set of bar charts of the variables ['Embarked', 'Sex'].;['All variables, but the class, should be dealt with as date.', 'The variable Sex can be seen as ordinal.', 'The variable Sex can be seen as ordinal without losing information.', 'Considering the common semantics for Sex and Embarked variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Embarked variable, dummification would be the most adequate encoding.', 'The variable Embarked can be coded as ordinal without losing information.', 'Feature generation based on variable Sex seems to be promising.', 'Feature generation based on the use of variable Sex wouldn’t be useful, but the use of Embarked seems to be promising.', 'Given the usual semantics of Sex variable, dummification would have been a better codification.', 'It is better to drop the variable Embarked than removing all records with missing values.', 'Not knowing the semantics of Embarked variable, dummification could have been a more adequate codification.'] |
|
Titanic_mv.png;A bar chart showing the number of missing values per variable of the dataset. The variables that have missing values are: ['Age', 'Embarked'].;['Discarding variable Embarked would be better than discarding all the records with missing values for that variable.', 'Dropping all records with missing values would be better than to drop the variables with missing values.', 'Dropping all rows with missing values can lead to a dataset with less than 25% of the original data.', 'There is no reason to believe that discarding records showing missing values is safer than discarding the corresponding variables in this case.', 'Feature generation based on variable Embarked seems to be promising.', 'It is better to drop the variable Age than removing all records with missing values.'] |
|
Titanic_class_histogram.png;A bar chart showing the distribution of the target variable Survived.;['Balancing this dataset would be mandatory to improve the results.'] |
|
Titanic_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] |
|
Titanic_histograms_numeric.png;A set of histograms of the variables ['Pclass', 'Age', 'SibSp', 'Parch', 'Fare'].;['All variables, but the class, should be dealt with as symbolic.', 'The variable Parch can be seen as ordinal.', 'The variable Fare can be seen as ordinal without losing information.', 'Variable Pclass is balanced.', 'It is clear that variable Parch shows some outliers, but we can’t be sure of the same for variable SibSp.', 'Outliers seem to be a problem in the dataset.', 'Variable Age shows a high number of outlier values.', 'Variable Fare doesn’t have any outliers.', 'Variable Parch presents some outliers.', 'At least 60% of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Age and Pclass variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for SibSp variable, dummification would be the most adequate encoding.', 'The variable Pclass can be coded as ordinal without losing information.', 'Feature generation based on variable Parch seems to be promising.', 'Feature generation based on the use of variable Age wouldn’t be useful, but the use of Pclass seems to be promising.', 'Given the usual semantics of Age variable, dummification would have been a better codification.', 'It is better to drop the variable Pclass than removing all records with missing values.', 'Not knowing the semantics of SibSp variable, dummification could have been a more adequate codification.'] |
|
apple_quality_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition Juiciness <= -0.3 and the second with the condition Crunchiness <= 2.25.;['The variable Crunchiness discriminates between the target values, as shown in the decision tree.', 'Variable Juiciness is one of the most relevant variables.', 'A smaller tree would be delivered if we applied post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positives is smaller than the number of False Negatives.', 'The recall for the presented tree is higher than 75%.', 'The number of True Positives is higher than the number of False Negatives for the presented tree.', 'The number of False Positives is higher than the number of True Positives for the presented tree.', 'The specificity for the presented tree is higher than 90%.', 'Considering that A=True<=>[Juiciness <= -0.3] and B=True<=>[Crunchiness <= 2.25], the Decision Tree presented classifies (not A, not B) as bad.', 'Considering that A=True<=>[Juiciness <= -0.3] and B=True<=>[Crunchiness <= 2.25], it is possible to state that the KNN algorithm classifies (not A, not B) as bad for any k ≤ 1625.', 'Considering that A=True<=>[Juiciness <= -0.3] and B=True<=>[Crunchiness <= 2.25], it is possible to state that the Naive Bayes algorithm classifies (not A, not B) as good.'] |
|
apple_quality_overfitting_mlp.png;A multi-line chart showing the overfitting of an MLP where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 iterations.'] |
|
apple_quality_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 1502 estimators.'] |
|
apple_quality_overfitting_rf.png;A multi-line chart showing the overfitting of a random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for the Random Forest identified as 3 may be explained by its estimators being in overfitting.', 'The random forest results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1502 estimators.'] |
|
apple_quality_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k larger than 17.', 'KNN with 7 neighbours is in overfitting.', 'KNN with more than 17 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 3 neighbors.'] |
|
apple_quality_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with a depth of 16 is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters overfitting for models with depth higher than 5.', 'The decision tree is in overfitting for depths above 6.', 'We are able to identify the existence of overfitting for decision tree models with a depth greater than 2.'] |
|
apple_quality_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with depth due to the overfitting phenomenon.'] |
|
apple_quality_pca.png;A bar chart showing the explained variance ratio of 7 principal components.;['The first 5 principal components are enough for explaining half the data variance.', 'Using the first 2 principal components would imply an error between 15 and 20%.'] |
|
apple_quality_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['Size', 'Weight', 'Sweetness', 'Crunchiness', 'Juiciness', 'Ripeness', 'Acidity'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables Crunchiness or Acidity can be discarded without losing information.', 'The variable Ripeness can be discarded without risking losing information.', 'Variables Juiciness and Crunchiness are redundant, but we can’t say the same for the pair Sweetness and Ripeness.', 'Variables Juiciness and Crunchiness are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable Juiciness seems to be relevant for the majority of mining tasks.', 'Variables Crunchiness and Weight seem to be useful for classification tasks.', 'Applying an unsupervised feature selection based on redundancy would not increase the performance of most training algorithms on this dataset.', 'Removing variable Juiciness might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection selecting variable Juiciness before variable Ripeness.'] |
|
apple_quality_boxplots.png;A set of boxplots of the variables ['Size', 'Weight', 'Sweetness', 'Crunchiness', 'Juiciness', 'Ripeness', 'Acidity'].;['Variable Weight is balanced.', 'These boxplots show that the data is not normalized.', 'It is clear that variable Sweetness shows some outliers, but we can’t be sure of the same for variable Crunchiness.', 'Outliers seem to be a problem in the dataset.', 'Variable Ripeness shows a high number of outlier values.', 'Variable Acidity doesn’t have any outliers.', 'Variable Juiciness presents some outliers.', 'At least 75% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve the Naive Bayes performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] |
|
apple_quality_class_histogram.png;A bar chart showing the distribution of the target variable Quality.;['Balancing this dataset would be mandatory to improve the results.'] |
|
apple_quality_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are binary, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] |
|
apple_quality_histograms_numeric.png;A set of histograms of the variables ['Size', 'Weight', 'Sweetness', 'Crunchiness', 'Juiciness', 'Ripeness', 'Acidity'].;['All variables, but the class, should be dealt with as numeric.', 'The variable Acidity can be seen as ordinal.', 'The variable Size can be seen as ordinal without losing information.', 'Variable Juiciness is balanced.', 'It is clear that variable Weight shows some outliers, but we can’t be sure of the same for variable Sweetness.', 'Outliers seem to be a problem in the dataset.', 'Variable Juiciness shows a high number of outlier values.', 'Variable Size doesn’t have any outliers.', 'Variable Weight presents some outliers.', 'At least 50% of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for Crunchiness and Size variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for Sweetness variable, dummification would be the most adequate encoding.', 'The variable Juiciness can be coded as ordinal without losing information.', 'Feature generation based on variable Acidity seems to be promising.', 'Feature generation based on the use of variable Acidity wouldn’t be useful, but the use of Size seems to be promising.', 'Given the usual semantics of Acidity variable, dummification would have been a better codification.', 'It is better to drop the variable Ripeness than removing all records with missing values.', 'Not knowing the semantics of Acidity variable, dummification could have been a more adequate codification.'] |
|
Employee_decision_tree.png;An image showing a decision tree with depth = 2 where the first decision is made with the condition JoiningYear <= 2017.5 and the second with the condition ExperienceInCurrentDomain <= 3.5.;['The variable JoiningYear discriminates between the target values, as shown in the decision tree.', 'Variable ExperienceInCurrentDomain is one of the most relevant variables.', 'A smaller tree would be delivered if we would apply post-pruning, accepting an accuracy reduction of 8%.', 'As reported in the tree, the number of False Positives is smaller than the number of False Negatives.', 'The recall for the presented tree is lower than 60%.', 'The number of False Positives is higher than the number of False Negatives for the presented tree.', 'The number of False Positives is higher than the number of True Positives for the presented tree.', 'The number of True Negatives is higher than the number of False Negatives for the presented tree.', 'Considering that A=True<=>[JoiningYear <= 2017.5] and B=True<=>[ExperienceInCurrentDomain <= 3.5], it is possible to state that KNN algorithm classifies (A,B) as 0 for any k ≤ 44.', 'Considering that A=True<=>[JoiningYear <= 2017.5] and B=True<=>[ExperienceInCurrentDomain <= 3.5], the Decision Tree presented classifies (not A, B) as 1.', 'Considering that A=True<=>[JoiningYear <= 2017.5] and B=True<=>[ExperienceInCurrentDomain <= 3.5], the Decision Tree presented classifies (A,B) as 0.'] |
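To reason about how the depth-2 tree classifies the four (A, B) combinations, one can fit such a tree on the two split variables and read off the leaf predictions. A minimal sketch follows, assuming the data sits in "Employee.csv" (file name is an assumption) with LeaveOrNot as target; the column names come from the chart descriptions.

```python
# Minimal sketch: fit a depth-2 tree on the two split variables from the
# figure and query it for each (A, B) combination.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

emp = pd.read_csv("Employee.csv")                     # assumed file name
X = emp[["JoiningYear", "ExperienceInCurrentDomain"]]
y = emp["LeaveOrNot"]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))

# A=True <=> JoiningYear <= 2017.5, B=True <=> ExperienceInCurrentDomain <= 3.5
cases = {
    "(A, B)":         {"JoiningYear": 2016, "ExperienceInCurrentDomain": 2},
    "(A, not B)":     {"JoiningYear": 2016, "ExperienceInCurrentDomain": 5},
    "(not A, B)":     {"JoiningYear": 2019, "ExperienceInCurrentDomain": 2},
    "(not A, not B)": {"JoiningYear": 2019, "ExperienceInCurrentDomain": 5},
}
for label, row in cases.items():
    print(label, "->", tree.predict(pd.DataFrame([row]))[0])
```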
|
Employee_overfitting_mlp.png;A multi-line chart showing the overfitting of a mlp where the y-axis represents the accuracy and the x-axis represents the number of iterations ranging from 100 to 1000.;['We are able to identify the existence of overfitting for MLP models trained longer than 500 episodes.'] |
|
Employee_overfitting_gb.png;A multi-line chart showing the overfitting of gradient boosting where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['We are able to identify the existence of overfitting for gradient boosting models with more than 502 estimators.'] |
|
Employee_overfitting_rf.png;A multi-line chart showing the overfitting of random forest where the y-axis represents the accuracy and the x-axis represents the number of estimators ranging from 2 to 2002.;['Results for Random Forests identified as 2, may be explained by its estimators being in underfitting.', 'The random forests results shown can be explained by the lack of diversity resulting from the number of features considered.', 'We are able to identify the existence of overfitting for random forest models with more than 1002 estimators.'] |
|
Employee_overfitting_knn.png;A multi-line chart showing the overfitting of k-nearest neighbors where the y-axis represents the accuracy and the x-axis represents the number of neighbors ranging from 1 to 23.;['KNN is in overfitting for k less than 13.', 'KNN with 7 neighbours is in overfitting.', 'KNN with more than 15 neighbours is in overfitting.', 'We are able to identify the existence of overfitting for KNN models with less than 2 neighbors.'] |
|
Employee_overfitting_decision_tree.png;A multi-line chart showing the overfitting of a decision tree where the y-axis represents the accuracy and the x-axis represents the max depth ranging from 2 to 25.;['According to the decision tree overfitting chart, the tree with 9 nodes of depth is in overfitting.', 'The chart reporting the recall for different trees shows that the model enters in overfitting for models with depth higher than 9.', 'The decision tree is in overfitting for depths above 4.', 'We are able to identify the existence of overfitting for decision tree models with more than 2 nodes of depth.'] |
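The overfitting charts above (MLP, gradient boosting, random forests, KNN and this decision-tree one) all follow the same recipe: plot training and test performance against a complexity parameter. Below is a sketch of that recipe for trees with max depth 2 to 25; X_train/X_test/y_train/y_test stand for an existing train/test split, which is an assumption here.

```python
# Sketch of the curve behind the decision-tree overfitting chart: train vs.
# test accuracy as max_depth grows. The split variables are placeholders.
import matplotlib.pyplot as plt
from sklearn.tree import DecisionTreeClassifier

depths = list(range(2, 26))
train_acc, test_acc = [], []
for d in depths:
    model = DecisionTreeClassifier(max_depth=d, random_state=0).fit(X_train, y_train)
    train_acc.append(model.score(X_train, y_train))
    test_acc.append(model.score(X_test, y_test))

plt.plot(depths, train_acc, label="train accuracy")
plt.plot(depths, test_acc, label="test accuracy")
plt.xlabel("max depth")
plt.ylabel("accuracy")
plt.legend()
plt.show()
# Overfitting is the region where train accuracy keeps climbing while test
# accuracy flattens or drops.
```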
|
Employee_overfitting_dt_acc_rec.png;A multi-line chart showing the overfitting of decision tree where the y-axis represents the performance of both accuracy and recall and the x-axis represents the max depth ranging from 2 to 25.;['The difference between recall and accuracy becomes smaller with the depth due to the overfitting phenomenon.'] |
|
Employee_pca.png;A bar chart showing the explained variance ratio of 4 principal components.;['The first 3 principal components are enough for explaining half the data variance.', 'Using the first 3 principal components would imply an error between 15 and 25%.'] |
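The PCA statements hinge on cumulative explained variance. A sketch is given below, reusing the `emp` dataframe loaded in the decision-tree sketch above; standardizing the variables before PCA is a common but assumed choice.

```python
# Sketch: explained variance ratio of the four principal components, as in
# the bar chart above. `emp` is assumed to be the Employee dataframe.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

numeric = emp[["JoiningYear", "PaymentTier", "Age", "ExperienceInCurrentDomain"]]
pca = PCA(n_components=4).fit(StandardScaler().fit_transform(numeric))

print("explained variance ratio:", pca.explained_variance_ratio_)
print("cumulative:              ", np.cumsum(pca.explained_variance_ratio_))
# Keeping only the first 3 components implies an error of 1 - cumulative[2].
```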
|
Employee_correlation_heatmap.png;A heatmap showing the correlation between the variables of the dataset. The variables are ['JoiningYear', 'PaymentTier', 'Age', 'ExperienceInCurrentDomain'].;['The intrinsic dimensionality of this dataset is 2.', 'One of the variables PaymentTier or JoiningYear can be discarded without losing information.', 'The variable JoiningYear can be discarded without risking losing information.', 'Variables Age and PaymentTier are redundant, but we can’t say the same for the pair ExperienceInCurrentDomain and JoiningYear.', 'Variables PaymentTier and JoiningYear are redundant.', 'From the correlation analysis alone, it is clear that there are relevant variables.', 'Variable JoiningYear seems to be relevant for the majority of mining tasks.', 'Variables Age and ExperienceInCurrentDomain seem to be useful for classification tasks.', 'Applying a non-supervised feature selection based on the redundancy would not increase the performance of the generality of the training algorithms in this dataset.', 'Removing variable PaymentTier might improve the training of decision trees.', 'There is evidence in favour of sequential backward selection selecting variable ExperienceInCurrentDomain before variable PaymentTier.'] |
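A sketch of the correlation matrix behind the heatmap, plus a crude redundancy screen; the |r| > 0.9 cut-off is an assumed rule of thumb, not a value taken from the document.

```python
# Sketch: correlation heatmap of the four numeric variables plus a simple
# redundancy check. `emp` is assumed to be the Employee dataframe.
import matplotlib.pyplot as plt
import seaborn as sns

cols = ["JoiningYear", "PaymentTier", "Age", "ExperienceInCurrentDomain"]
corr = emp[cols].corr()

sns.heatmap(corr, annot=True, vmin=-1, vmax=1, cmap="coolwarm")
plt.show()

redundant = [(a, b) for i, a in enumerate(cols) for b in cols[i + 1:]
             if abs(corr.loc[a, b]) > 0.9]      # assumed threshold
print("candidate redundant pairs:", redundant)
```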
|
Employee_boxplots.png;A set of boxplots of the variables ['JoiningYear', 'PaymentTier', 'Age', 'ExperienceInCurrentDomain'].;['Variable ExperienceInCurrentDomain is balanced.', 'Those boxplots show that the data is not normalized.', 'It is clear that variable PaymentTier shows some outliers, but we can’t be sure of the same for variable Age.', 'Outliers seem to be a problem in the dataset.', 'Variable JoiningYear shows a high number of outlier values.', 'Variable JoiningYear doesn’t have any outliers.', 'Variable PaymentTier presents some outliers.', 'At least 60% of the variables present outliers.', 'The boxplots presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'A scaling transformation is mandatory in order to improve the KNN performance in this dataset.', 'Multiplying ratio and Boolean variables by 100, and variables with a range between 0 and 10 by 10, would have an impact similar to other scaling transformations.', 'Normalization of this dataset could not have an impact on a KNN classifier.', 'Scaling this dataset would be mandatory to improve the results with distance-based methods.'] |
|
Employee_histograms_symbolic.png;A set of bar charts of the variables ['Education', 'City', 'Gender', 'EverBenched'].;['All variables, but the class, should be dealt with as date.', 'The variable Gender can be seen as ordinal.', 'The variable EverBenched can be seen as ordinal without losing information.', 'Considering the common semantics for Education and City variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for City variable, dummification would be the most adequate encoding.', 'The variable City can be coded as ordinal without losing information.', 'Feature generation based on variable City seems to be promising.', 'Feature generation based on the use of variable EverBenched wouldn’t be useful, but the use of Education seems to be promising.', 'Given the usual semantics of Gender variable, dummification would have been a better codification.', 'It is better to drop the variable EverBenched than to remove all records with missing values.', 'Not knowing the semantics of Education variable, dummification could have been a more adequate codification.'] |
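These statements contrast ordinal coding with dummification for the symbolic variables. A sketch of both encodings with pandas follows; the specific Education categories and their order are assumptions.

```python
# Sketch: one-hot ("dummification") vs. ordinal encoding of the symbolic
# variables. Category names and order for Education are assumptions; `emp`
# is assumed to be the Employee dataframe.
import pandas as pd

symbolic = ["Education", "City", "Gender", "EverBenched"]
dummified = pd.get_dummies(emp[symbolic])
print("dummified width:", dummified.shape[1])   # grows with the number of categories

education_order = {"Bachelors": 0, "Masters": 1, "PHD": 2}  # assumed ordering
ordinal_education = emp["Education"].map(education_order)   # stays a single column
print(ordinal_education.value_counts(dropna=False))
```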
|
Employee_class_histogram.png;A bar chart showing the distribution of the target variable LeaveOrNot.;['Balancing this dataset would be mandatory to improve the results.'] |
|
Employee_nr_records_nr_variables.png;A bar chart showing the number of records and variables of the dataset.;['Given the number of records and that some variables are date, we might be facing the curse of dimensionality.', 'We face the curse of dimensionality when training a classifier with this dataset.', 'Balancing this dataset by SMOTE would most probably be preferable over undersampling.'] |
|
Employee_histograms_numeric.png;A set of histograms of the variables ['JoiningYear', 'PaymentTier', 'Age', 'ExperienceInCurrentDomain'].;['All variables, but the class, should be dealt with as date.', 'The variable PaymentTier can be seen as ordinal.', 'The variable Age can be seen as ordinal without losing information.', 'Variable Age is balanced.', 'It is clear that variable PaymentTier shows some outliers, but we can’t be sure of the same for variable Age.', 'Outliers seem to be a problem in the dataset.', 'Variable JoiningYear shows some outlier values.', 'Variable ExperienceInCurrentDomain doesn’t have any outliers.', 'Variable PaymentTier presents some outliers.', 'At least 50% of the variables present outliers.', 'The histograms presented show a large number of outliers for most of the numeric variables.', 'The existence of outliers is one of the problems to tackle in this dataset.', 'Considering the common semantics for JoiningYear and PaymentTier variables, dummification if applied would increase the risk of facing the curse of dimensionality.', 'Considering the common semantics for PaymentTier variable, dummification would be the most adequate encoding.', 'The variable PaymentTier can be coded as ordinal without losing information.', 'Feature generation based on variable PaymentTier seems to be promising.', 'Feature generation based on the use of variable ExperienceInCurrentDomain wouldn’t be useful, but the use of JoiningYear seems to be promising.', 'Given the usual semantics of PaymentTier variable, dummification would have been a better codification.', 'It is better to drop the variable JoiningYear than to remove all records with missing values.', 'Not knowing the semantics of Age variable, dummification could have been a more adequate codification.'] |
|
|