Classification models attempt to predict a target in a discrete space, that is, to assign each instance one or more categories based on its features. Classification score visualizers display the differences between classes as well as a number of classifier-specific visual evaluations. We currently have implemented the following classifier evaluations:
- Classification Report: A visual classification report that displays precision, recall, and F1 per-class as a heatmap.
- Confusion Matrix: A heatmap view of the confusion matrix of pairs of classes in multi-class classification.
- ROCAUC: Graphs the receiver operating characteristic (ROC) curve and the area under the curve (AUC).
- Precision-Recall Curves: Plots the precision and recall for different probability thresholds.
- Class Balance: Visual inspection of the target to show the support of each class to the final estimator.
- Class Prediction Error: An alternative to the confusion matrix that shows both support and the difference between actual and predicted classes.
- Discrimination Threshold: Shows precision, recall, F1, and queue rate over all thresholds for binary classifiers that use a discrimination probability or score.
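The per-class quantities that the Classification Report and Confusion Matrix visualizers render as heatmaps can be computed directly with scikit-learn. A minimal sketch on a toy dataset (the dataset and model choices here are illustrative, not prescribed by the text above):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import confusion_matrix, precision_recall_fscore_support

# Illustrative data and model; the visualizers plot exactly these quantities.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = GaussianNB().fit(X_train, y_train)
y_pred = model.predict(X_test)

# Per-class precision, recall, F1, and support
# (the Classification Report heatmap, and support for Class Balance).
precision, recall, f1, support = precision_recall_fscore_support(y_test, y_pred)

# Pairwise class confusion counts (the Confusion Matrix heatmap).
cm = confusion_matrix(y_test, y_pred)
```

The visualizers save you this bookkeeping and render the arrays with class labels and color scales attached.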
Estimator score visualizers wrap scikit-learn estimators and expose the Estimator API such that they have `fit()`, `predict()`, and `score()` methods that call the appropriate estimator methods under the hood. Score visualizers can wrap an estimator and be passed in as the final step in a `Pipeline` or `VisualPipeline`.
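The wrapping pattern can be sketched as a small delegating class. This is a hypothetical illustration of the idea, not Yellowbrick's actual implementation:

```python
from sklearn.base import BaseEstimator
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

class ScoreVisualizerSketch(BaseEstimator):
    """Hypothetical sketch: delegate fit/predict/score to a wrapped estimator."""

    def __init__(self, estimator):
        self.estimator = estimator

    def fit(self, X, y=None, **kwargs):
        # A real score visualizer would also prepare its drawing here.
        self.estimator.fit(X, y, **kwargs)
        return self

    def predict(self, X):
        return self.estimator.predict(X)

    def score(self, X, y):
        # A real score visualizer computes its metrics and renders them here.
        return self.estimator.score(X, y)

# Because the wrapper exposes the Estimator API, it can serve as the
# final step in a scikit-learn Pipeline.
X, y = make_classification(n_samples=200, random_state=0)
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("viz", ScoreVisualizerSketch(LogisticRegression())),
])
pipe.fit(X, y)
acc = pipe.score(X, y)
```

The design choice is that wrapping, rather than subclassing each estimator, lets one visualizer work with any model that follows the scikit-learn API.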
```python
# Classifier Evaluation Imports
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

from yellowbrick.target import ClassBalance
from yellowbrick.classifier import ROCAUC
from yellowbrick.classifier import PrecisionRecallCurve
from yellowbrick.classifier import ClassificationReport
from yellowbrick.classifier import ClassPredictionError
from yellowbrick.classifier import DiscriminationThreshold
```
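To make the last of these imports concrete: the quantities that Discrimination Threshold plots (precision, recall, F1, and queue rate across thresholds) can be computed by hand for any binary classifier exposing `predict_proba`. A sketch, with illustrative data rather than anything from the original:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_fscore_support

# Illustrative imbalanced binary data and a probabilistic classifier.
X, y = make_classification(n_samples=300, weights=[0.8, 0.2], random_state=1)
probs = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]

rows = []
for t in np.linspace(0.1, 0.9, 9):
    # Apply the discrimination threshold t to the positive-class probability.
    y_hat = (probs >= t).astype(int)
    p, r, f1, _ = precision_recall_fscore_support(
        y, y_hat, average="binary", zero_division=0
    )
    # Queue rate: fraction of instances flagged positive at this threshold.
    queue_rate = y_hat.mean()
    rows.append((t, p, r, f1, queue_rate))

# Raising the threshold trades recall for precision and shrinks the queue;
# the visualizer plots these four curves against the threshold axis.
```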