PCA Projection

The PCA Decomposition visualizer utilizes principal component analysis to decompose high-dimensional data into two or three dimensions so that each instance can be plotted in a scatter plot. The use of PCA means that the projected dataset can be analyzed along axes of principal variation and can be interpreted to determine whether spherical distance metrics are appropriate for the data.

import numpy as np

from yellowbrick.features.pca import PCADecomposition

# Load the classification data set (load_data is assumed to be the helper
# used in these examples to read the sample datasets)
data = load_data('credit')

# Specify the features of interest and the target
target = "default"
features = [col for col in data.columns if col != target]

# Extract the instance data and the target
X = data[features]
y = data[target]

# Create a list of colors to assign to points in the plot
colors = np.array(['r' if yi else 'b' for yi in y])

visualizer = PCADecomposition(scale=True, color=colors)
visualizer.fit_transform(X, y)
visualizer.poof()
[Figure: pca_projection_2d.png, the two-dimensional PCA projection of the credit dataset]

The PCA projection can also be plotted in three dimensions to attempt to visualize more principal components and get a better sense of the distribution in high dimensions.

visualizer = PCADecomposition(scale=True, color=colors, proj_dim=3)
visualizer.fit_transform(X, y)
visualizer.poof()
[Figure: pca_projection_3d.png, the three-dimensional PCA projection of the credit dataset]

Biplot

The PCA projection can be enhanced to a biplot whose points are the projected instances and whose vectors represent the structure of the data in high-dimensional space. By using the proj_features=True flag, vectors for each feature in the dataset are drawn on the scatter plot in the direction of the maximum variance for that feature. These vectors can be used to analyze the importance of a feature to the decomposition or to find features of related variance for further analysis.

# Load the regression data set
data = load_data('concrete')

# Specify the features of interest and the target
target = "strength"
features = [
    'cement', 'slag', 'ash', 'water', 'splast', 'coarse', 'fine', 'age'
]

# Extract the instance data and the target
X = data[features]
y = data[target]

visualizer = PCADecomposition(scale=True, proj_features=True)
visualizer.fit_transform(X, y)
visualizer.poof()
[Figure: pca_biplot_2d.png, the two-dimensional PCA biplot of the concrete dataset]
visualizer = PCADecomposition(scale=True, proj_features=True, proj_dim=3)
visualizer.fit_transform(X, y)
visualizer.poof()
[Figure: pca_biplot_3d.png, the three-dimensional PCA biplot of the concrete dataset]

API Reference

Decomposition based feature visualization with PCA.

class yellowbrick.features.pca.PCADecomposition(ax=None, features=None, scale=True, proj_dim=2, proj_features=False, color=None, colormap='RdBu', random_state=None, **kwargs)[source]

Bases: yellowbrick.features.base.MultiFeatureVisualizer

Produce a two- or three-dimensional principal component plot of a data array projected onto its largest sequential principal components. It is common practice to scale the data array X before applying a PC decomposition. Variable scaling can be controlled using the scale argument.

Parameters:
ax : matplotlib Axes, default: None

The axes to plot the figure on. If None is passed in, the current axes will be used (or generated if required).

features: list, default: None

A list of feature names to use. If a DataFrame is passed to fit and features is None, feature names are selected from the columns of the DataFrame.

scale : bool, default: True

Boolean that indicates if the user wants to scale the data.

proj_dim : int, default: 2

Dimension of the projection, either 2 or 3.

proj_features : bool, default: False

Boolean that indicates if the user wants to project the features in the projected space. If True the plot will be similar to a biplot.

color : list or tuple of colors, default: None

Specify the colors for each individual class.

colormap : string or cmap, default: 'RdBu'

Optional string or matplotlib cmap to colorize the points. Use either color to colorize the points on a per-class basis or colormap to color them on a continuous scale.

random_state : int, RandomState instance or None, optional (default None)

If the input data is larger than 500x500 and the number of components to extract is lower than 80% of the smallest dimension of the data, then the more efficient randomized solver is enabled; this parameter sets the random state for that solver.

kwargs : dict

Keyword arguments that are passed to the base class and may influence the visualization as defined in other Visualizers.

Examples

>>> from sklearn import datasets
>>> iris = datasets.load_iris()
>>> X = iris.data
>>> y = iris.target
>>> visualizer = PCADecomposition()
>>> visualizer.fit_transform(X, y)
>>> visualizer.poof()
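
As in the credit example above, explicit per-point colors can also be supplied through the color parameter; the following sketch maps each iris class to a color (the particular colors chosen here are illustrative):

>>> import numpy as np
>>> colors = np.array(['r', 'g', 'b'])[y]
>>> visualizer = PCADecomposition(scale=True, color=colors)
>>> visualizer.fit_transform(X, y)
>>> visualizer.poof()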
draw(**kwargs)[source]

The fitting or transformation process usually calls draw (not the user). This function is implemented for developers to hook into the matplotlib interface and to create an internal representation of the data the visualizer was trained on in the form of a figure or axes.

Parameters:
kwargs: dict

generic keyword arguments.

finalize(**kwargs)[source]

Finalize executes any subclass-specific axes finalization steps.

Parameters:
kwargs: dict

generic keyword arguments.

Notes

The user calls poof and poof calls finalize. Developers should implement visualizer-specific finalization methods like setting titles or axes labels, etc.

fit(X, y=None, **kwargs)[source]

Fits the PCA transformer, transforms the data in X, then draws the decomposition in either 2D or 3D space as a scatter plot.

Parameters:
X : ndarray or DataFrame of shape n x m

A matrix of n instances with m features.

y : ndarray or Series of length n

An array or series of target or class values.

Returns:
self : visualizer

Returns self for use in Pipelines

transform(X, y=None, **kwargs)[source]

Primarily a pass-through to ensure that the feature visualizer will work in a pipeline setting. This method can also call drawing methods in order to ensure that the visualization is constructed.

This method returns the transformed data as a numpy array.
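
For instance, a minimal sketch of placing the visualizer inside a scikit-learn Pipeline; the StandardScaler step and the X and y variables are assumptions carried over from the examples above, not part of the visualizer's API:

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

from yellowbrick.features.pca import PCADecomposition

# Standardize up front, then let the visualizer project and draw the data;
# scale=False because the StandardScaler step already handles scaling.
pipeline = Pipeline([
    ('scale', StandardScaler()),
    ('pca_viz', PCADecomposition(scale=False, proj_dim=2)),
])

pipeline.fit_transform(X, y)            # draws the projection while transforming
pipeline.named_steps['pca_viz'].poof()  # finalize and show the figure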