
python - scikit - sklearn agglomerative clustering linkage matrix




It's possible, but it isn't pretty. It requires (at minimum) a small rewrite of AgglomerativeClustering.fit (source). The difficulty is that the method needs a number of imports, so it ends up looking somewhat ugly. To add this feature:

  1. Insert the following line after line 748:

    kwargs['return_distance'] = True

  2. Replace line 752 with:

    self.children_, self.n_components_, self.n_leaves_, parents, self.distance = \

This will give you a new attribute, distance, that you can easily access.
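For orientation, once both edits are in place the relevant part of fit() ends up looking roughly like the excerpt below (it mirrors the full listing further down; the exact line numbers depend on your scikit-learn version):

# Inside AgglomerativeClustering.fit, around the tree construction:
kwargs = {}
kwargs['return_distance'] = True   # <- the inserted line
if self.linkage != 'ward':
    kwargs['linkage'] = self.linkage
    kwargs['affinity'] = self.affinity
self.children_, self.n_components_, self.n_leaves_, parents, \
    self.distance = memory.cache(tree_builder)(X, connectivity,
                                               n_components=self.n_components,
                                               n_clusters=n_clusters,
                                               **kwargs)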

A couple of things to keep in mind:

  1. While doing this, I ran into this issue with the check_array function on line 711. It can be fixed by using check_arrays instead (from sklearn.utils.validation import check_arrays). You can modify that line to become X = check_arrays(X)[0]. This appears to be a bug (I still hit it on the most recent version of scikit-learn).

  2. Depending on which version of sklearn.cluster.hierarchical.linkage_tree you have, you may also need to modify it to be the one provided in the source.

To make things easier for everyone, here is the full code that you will need to use:

from heapq import heapify, heappop, heappush, heappushpop
import warnings
import sys

import numpy as np
from scipy import sparse

from sklearn.base import BaseEstimator, ClusterMixin
from sklearn.externals.joblib import Memory
from sklearn.externals import six
from sklearn.utils.validation import check_arrays
from sklearn.utils.sparsetools import connected_components
from sklearn.cluster import _hierarchical
from sklearn.cluster.hierarchical import ward_tree
from sklearn.cluster._feature_agglomeration import AgglomerationTransform
from sklearn.utils.fast_dict import IntFloatDict
# needed by _fix_connectivity and linkage_tree below
from sklearn.metrics.pairwise import paired_distances, pairwise_distances


def _fix_connectivity(X, connectivity, n_components=None,
                      affinity="euclidean"):
    """
    Fixes the connectivity matrix
        - copies it
        - makes it symmetric
        - converts it to LIL if necessary
        - completes it if necessary
    """
    n_samples = X.shape[0]
    if (connectivity.shape[0] != n_samples or
            connectivity.shape[1] != n_samples):
        raise ValueError('Wrong shape for connectivity matrix: %s '
                         'when X is %s' % (connectivity.shape, X.shape))

    # Make the connectivity matrix symmetric:
    connectivity = connectivity + connectivity.T

    # Convert connectivity matrix to LIL
    if not sparse.isspmatrix_lil(connectivity):
        if not sparse.isspmatrix(connectivity):
            connectivity = sparse.lil_matrix(connectivity)
        else:
            connectivity = connectivity.tolil()

    # Compute the number of nodes
    n_components, labels = connected_components(connectivity)

    if n_components > 1:
        warnings.warn("the number of connected components of the "
                      "connectivity matrix is %d > 1. Completing it to avoid "
                      "stopping the tree early." % n_components,
                      stacklevel=2)
        # XXX: Can we do without completing the matrix?
        for i in xrange(n_components):
            idx_i = np.where(labels == i)[0]
            Xi = X[idx_i]
            for j in xrange(i):
                idx_j = np.where(labels == j)[0]
                Xj = X[idx_j]
                D = pairwise_distances(Xi, Xj, metric=affinity)
                ii, jj = np.where(D == np.min(D))
                ii = ii[0]
                jj = jj[0]
                connectivity[idx_i[ii], idx_j[jj]] = True
                connectivity[idx_j[jj], idx_i[ii]] = True

    return connectivity, n_components


# average and complete linkage
def linkage_tree(X, connectivity=None, n_components=None,
                 n_clusters=None, linkage='complete', affinity="euclidean",
                 return_distance=False):
    """Linkage agglomerative clustering based on a Feature matrix.

    The inertia matrix uses a Heapq-based representation.

    This is the structured version, that takes into account some topological
    structure between samples.

    Parameters
    ----------
    X : array, shape (n_samples, n_features)
        feature matrix representing n_samples samples to be clustered

    connectivity : sparse matrix (optional).
        connectivity matrix. Defines for each sample the neighboring samples
        following a given structure of the data. The matrix is assumed to
        be symmetric and only the upper triangular half is used.
        Default is None, i.e, the Ward algorithm is unstructured.

    n_components : int (optional)
        Number of connected components. If None the number of connected
        components is estimated from the connectivity matrix.
        NOTE: This parameter is now directly determined from the
        connectivity matrix and will be removed in 0.18

    n_clusters : int (optional)
        Stop early the construction of the tree at n_clusters. This is
        useful to decrease computation time if the number of clusters is
        not small compared to the number of samples. In this case, the
        complete tree is not computed, thus the 'children' output is of
        limited use, and the 'parents' output should rather be used.
        This option is valid only when specifying a connectivity matrix.

    linkage : {"average", "complete"}, optional, default: "complete"
        Which linkage criteria to use. The linkage criterion determines which
        distance to use between sets of observation.
            - average uses the average of the distances of each observation
              of the two sets
            - complete or maximum linkage uses the maximum distances between
              all observations of the two sets.

    affinity : string or callable, optional, default: "euclidean".
        which metric to use. Can be "euclidean", "manhattan", or any
        distance known to paired distance (see metric.pairwise)

    return_distance : bool, default False
        whether or not to return the distances between the clusters.

    Returns
    -------
    children : 2D array, shape (n_nodes-1, 2)
        The children of each non-leaf node. Values less than `n_samples`
        correspond to leaves of the tree which are the original samples.
        A node `i` greater than or equal to `n_samples` is a non-leaf
        node and has children `children_[i - n_samples]`. Alternatively
        at the i-th iteration, children[i][0] and children[i][1]
        are merged to form node `n_samples + i`

    n_components : int
        The number of connected components in the graph.

    n_leaves : int
        The number of leaves in the tree.

    parents : 1D array, shape (n_nodes, ) or None
        The parent of each node. Only returned when a connectivity matrix
        is specified, elsewhere 'None' is returned.

    distances : ndarray, shape (n_nodes-1,)
        Returned when return_distance is set to True.
        distances[i] refers to the distance between
        children[i][0] and children[i][1] when they are merged.

    See also
    --------
    ward_tree : hierarchical clustering with ward linkage
    """
    X = np.asarray(X)
    if X.ndim == 1:
        X = np.reshape(X, (-1, 1))
    n_samples, n_features = X.shape

    linkage_choices = {'complete': _hierarchical.max_merge,
                       'average': _hierarchical.average_merge,
                       }
    try:
        join_func = linkage_choices[linkage]
    except KeyError:
        raise ValueError(
            'Unknown linkage option, linkage should be one '
            'of %s, but %s was given' % (linkage_choices.keys(), linkage))

    if connectivity is None:
        from scipy.cluster import hierarchy  # imports PIL

        if n_clusters is not None:
            warnings.warn('Partial build of the tree is implemented '
                          'only for structured clustering (i.e. with '
                          'explicit connectivity). The algorithm '
                          'will build the full tree and only '
                          'retain the lower branches required '
                          'for the specified number of clusters',
                          stacklevel=2)

        if affinity == 'precomputed':
            # for the linkage function of hierarchy to work on precomputed
            # data, provide as first argument an ndarray of the shape returned
            # by pdist: it is a flat array containing the upper triangular of
            # the distance matrix.
            i, j = np.triu_indices(X.shape[0], k=1)
            X = X[i, j]
        elif affinity == 'l2':
            # Translate to something understood by scipy
            affinity = 'euclidean'
        elif affinity in ('l1', 'manhattan'):
            affinity = 'cityblock'
        elif callable(affinity):
            X = affinity(X)
            i, j = np.triu_indices(X.shape[0], k=1)
            X = X[i, j]
        out = hierarchy.linkage(X, method=linkage, metric=affinity)
        children_ = out[:, :2].astype(np.int)

        if return_distance:
            distances = out[:, 2]
            return children_, 1, n_samples, None, distances
        return children_, 1, n_samples, None

    if n_components is not None:
        warnings.warn(
            "n_components is now directly calculated from the connectivity "
            "matrix and will be removed in 0.18",
            DeprecationWarning)
    connectivity, n_components = _fix_connectivity(X, connectivity)

    connectivity = connectivity.tocoo()
    # Put the diagonal to zero
    diag_mask = (connectivity.row != connectivity.col)
    connectivity.row = connectivity.row[diag_mask]
    connectivity.col = connectivity.col[diag_mask]
    connectivity.data = connectivity.data[diag_mask]
    del diag_mask

    if affinity == 'precomputed':
        distances = X[connectivity.row, connectivity.col]
    else:
        # FIXME We compute all the distances, while we could have only
        # computed the "interesting" distances
        distances = paired_distances(X[connectivity.row],
                                     X[connectivity.col],
                                     metric=affinity)
    connectivity.data = distances

    if n_clusters is None:
        n_nodes = 2 * n_samples - 1
    else:
        assert n_clusters <= n_samples
        n_nodes = 2 * n_samples - n_clusters

    if return_distance:
        distances = np.empty(n_nodes - n_samples)

    # create inertia heap and connection matrix
    A = np.empty(n_nodes, dtype=object)
    inertia = list()

    # LIL seems to be the best format to access the rows quickly,
    # without the numpy overhead of slicing CSR indices and data.
    connectivity = connectivity.tolil()
    # We are storing the graph in a list of IntFloatDict
    for ind, (data, row) in enumerate(zip(connectivity.data,
                                          connectivity.rows)):
        A[ind] = IntFloatDict(np.asarray(row, dtype=np.intp),
                              np.asarray(data, dtype=np.float64))
        # We keep only the upper triangular for the heap
        # Generator expressions are faster than arrays on the following
        inertia.extend(_hierarchical.WeightedEdge(d, ind, r)
                       for r, d in zip(row, data) if r < ind)
    del connectivity

    heapify(inertia)

    # prepare the main fields
    parent = np.arange(n_nodes, dtype=np.intp)
    used_node = np.ones(n_nodes, dtype=np.intp)
    children = []

    # recursive merge loop
    for k in xrange(n_samples, n_nodes):
        # identify the merge
        while True:
            edge = heappop(inertia)
            if used_node[edge.a] and used_node[edge.b]:
                break
        i = edge.a
        j = edge.b

        if return_distance:
            # store distances
            distances[k - n_samples] = edge.weight

        parent[i] = parent[j] = k
        children.append((i, j))
        # Keep track of the number of elements per cluster
        n_i = used_node[i]
        n_j = used_node[j]
        used_node[k] = n_i + n_j
        used_node[i] = used_node[j] = False

        # update the structure matrix A and the inertia matrix
        # a clever 'min', or 'max' operation between A[i] and A[j]
        coord_col = join_func(A[i], A[j], used_node, n_i, n_j)
        for l, d in coord_col:
            A[l].append(k, d)
            # Here we use the information from coord_col (containing the
            # distances) to update the heap
            heappush(inertia, _hierarchical.WeightedEdge(d, k, l))
        A[k] = coord_col
        # Clear A[i] and A[j] to save memory
        A[i] = A[j] = 0

    # Separate leaves in children (empty lists up to now)
    n_leaves = n_samples

    # return numpy array for efficient caching
    children = np.array(children)[:, ::-1]

    if return_distance:
        return children, n_components, n_leaves, parent, distances
    return children, n_components, n_leaves, parent


# Matching names to tree-building strategies
def _complete_linkage(*args, **kwargs):
    kwargs['linkage'] = 'complete'
    return linkage_tree(*args, **kwargs)


def _average_linkage(*args, **kwargs):
    kwargs['linkage'] = 'average'
    return linkage_tree(*args, **kwargs)


_TREE_BUILDERS = dict(
    ward=ward_tree,
    complete=_complete_linkage,
    average=_average_linkage,
)


def _hc_cut(n_clusters, children, n_leaves):
    """Function cutting the ward tree for a given number of clusters.

    Parameters
    ----------
    n_clusters : int or ndarray
        The number of clusters to form.

    children : list of pairs. Length of n_nodes
        The children of each non-leaf node. Values less than `n_samples` refer
        to leaves of the tree. A greater value `i` indicates a node with
        children `children[i - n_samples]`.

    n_leaves : int
        Number of leaves of the tree.

    Returns
    -------
    labels : array [n_samples]
        cluster labels for each point
    """
    if n_clusters > n_leaves:
        raise ValueError('Cannot extract more clusters than samples: '
                         '%s clusters where given for a tree with %s leaves.'
                         % (n_clusters, n_leaves))
    # In this function, we store nodes as a heap to avoid recomputing
    # the max of the nodes: the first element is always the smallest
    # We use negated indices as heaps work on smallest elements, and we
    # are interested in largest elements
    # children[-1] is the root of the tree
    nodes = [-(max(children[-1]) + 1)]
    for i in xrange(n_clusters - 1):
        # As we have a heap, nodes[0] is the smallest element
        these_children = children[-nodes[0] - n_leaves]
        # Insert the 2 children and remove the largest node
        heappush(nodes, -these_children[0])
        heappushpop(nodes, -these_children[1])
    label = np.zeros(n_leaves, dtype=np.intp)
    for i, node in enumerate(nodes):
        label[_hierarchical._hc_get_descendent(-node, children, n_leaves)] = i
    return label


class AgglomerativeClustering(BaseEstimator, ClusterMixin):
    """
    Agglomerative Clustering

    Recursively merges the pair of clusters that minimally increases
    a given linkage distance.

    Parameters
    ----------
    n_clusters : int, default=2
        The number of clusters to find.

    connectivity : array-like or callable, optional
        Connectivity matrix. Defines for each sample the neighboring
        samples following a given structure of the data.
        This can be a connectivity matrix itself or a callable that transforms
        the data into a connectivity matrix, such as derived from
        kneighbors_graph. Default is None, i.e, the
        hierarchical clustering algorithm is unstructured.

    affinity : string or callable, default: "euclidean"
        Metric used to compute the linkage. Can be "euclidean", "l1", "l2",
        "manhattan", "cosine", or 'precomputed'.
        If linkage is "ward", only "euclidean" is accepted.

    memory : Instance of joblib.Memory or string (optional)
        Used to cache the output of the computation of the tree.
        By default, no caching is done. If a string is given, it is the
        path to the caching directory.

    n_components : int (optional)
        Number of connected components. If None the number of connected
        components is estimated from the connectivity matrix.
        NOTE: This parameter is now directly determined from the connectivity
        matrix and will be removed in 0.18

    compute_full_tree : bool or 'auto' (optional)
        Stop early the construction of the tree at n_clusters. This is
        useful to decrease computation time if the number of clusters is
        not small compared to the number of samples. This option is
        useful only when specifying a connectivity matrix. Note also that
        when varying the number of clusters and using caching, it may
        be advantageous to compute the full tree.

    linkage : {"ward", "complete", "average"}, optional, default: "ward"
        Which linkage criterion to use. The linkage criterion determines which
        distance to use between sets of observation. The algorithm will merge
        the pairs of cluster that minimize this criterion.

        - ward minimizes the variance of the clusters being merged.
        - average uses the average of the distances of each observation of
          the two sets.
        - complete or maximum linkage uses the maximum distances between
          all observations of the two sets.

    pooling_func : callable, default=np.mean
        This combines the values of agglomerated features into a single
        value, and should accept an array of shape [M, N] and the keyword
        argument ``axis=1``, and reduce it to an array of size [M].

    Attributes
    ----------
    labels_ : array [n_samples]
        cluster labels for each point

    n_leaves_ : int
        Number of leaves in the hierarchical tree.

    n_components_ : int
        The estimated number of connected components in the graph.

    children_ : array-like, shape (n_nodes-1, 2)
        The children of each non-leaf node. Values less than `n_samples`
        correspond to leaves of the tree which are the original samples.
        A node `i` greater than or equal to `n_samples` is a non-leaf
        node and has children `children_[i - n_samples]`. Alternatively
        at the i-th iteration, children[i][0] and children[i][1]
        are merged to form node `n_samples + i`
    """

    def __init__(self, n_clusters=2, affinity="euclidean",
                 memory=Memory(cachedir=None, verbose=0),
                 connectivity=None, n_components=None,
                 compute_full_tree='auto', linkage='ward',
                 pooling_func=np.mean):
        self.n_clusters = n_clusters
        self.memory = memory
        self.n_components = n_components
        self.connectivity = connectivity
        self.compute_full_tree = compute_full_tree
        self.linkage = linkage
        self.affinity = affinity
        self.pooling_func = pooling_func

    def fit(self, X, y=None):
        """Fit the hierarchical clustering on the data

        Parameters
        ----------
        X : array-like, shape = [n_samples, n_features]
            The samples a.k.a. observations.

        Returns
        -------
        self
        """
        X = check_arrays(X)[0]
        memory = self.memory
        if isinstance(memory, six.string_types):
            memory = Memory(cachedir=memory, verbose=0)

        if self.linkage == "ward" and self.affinity != "euclidean":
            raise ValueError("%s was provided as affinity. Ward can only "
                             "work with euclidean distances." %
                             (self.affinity, ))

        if self.linkage not in _TREE_BUILDERS:
            raise ValueError("Unknown linkage type %s."
                             "Valid options are %s" % (self.linkage,
                                                       _TREE_BUILDERS.keys()))
        tree_builder = _TREE_BUILDERS[self.linkage]

        connectivity = self.connectivity
        if self.connectivity is not None:
            if callable(self.connectivity):
                connectivity = self.connectivity(X)
            connectivity = check_arrays(
                connectivity, accept_sparse=['csr', 'coo', 'lil'])

        n_samples = len(X)
        compute_full_tree = self.compute_full_tree
        if self.connectivity is None:
            compute_full_tree = True
        if compute_full_tree == 'auto':
            # Early stopping is likely to give a speed up only for
            # a large number of clusters. The actual threshold
            # implemented here is heuristic
            compute_full_tree = self.n_clusters < max(100, .02 * n_samples)

        n_clusters = self.n_clusters
        if compute_full_tree:
            n_clusters = None

        # Construct the tree
        kwargs = {}
        kwargs['return_distance'] = True
        if self.linkage != 'ward':
            kwargs['linkage'] = self.linkage
            kwargs['affinity'] = self.affinity
        self.children_, self.n_components_, self.n_leaves_, parents, \
            self.distance = memory.cache(tree_builder)(X, connectivity,
                                                       n_components=self.n_components,
                                                       n_clusters=n_clusters,
                                                       **kwargs)

        # Cut the tree
        if compute_full_tree:
            self.labels_ = _hc_cut(self.n_clusters, self.children_,
                                   self.n_leaves_)
        else:
            labels = _hierarchical.hc_get_heads(parents, copy=False)
            # copy to avoid holding a reference on the original array
            labels = np.copy(labels[:n_samples])
            # Reassign cluster numbers
            self.labels_ = np.searchsorted(np.unique(labels), labels)
        return self

Below is a simple example showing how to use the modified AgglomerativeClustering class:

import numpy as np
from AgglomerativeClustering import AgglomerativeClustering  # Make sure to use the new one!!!

d = np.array(
    [
        [1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]
    ]
)

clustering = AgglomerativeClustering(n_clusters=2, compute_full_tree=True,
                                     affinity='euclidean', linkage='complete')
clustering.fit(d)
print clustering.distance

That example has the following output:

[ 5.19615242 10.39230485]

This can be compared with a scipy.cluster.hierarchy.linkage implementation:

import numpy as np
from scipy.cluster.hierarchy import linkage

d = np.array(
    [
        [1, 2, 3],
        [4, 5, 6],
        [7, 8, 9]
    ]
)

print linkage(d, 'complete')

Output:

[[  1.           2.           5.19615242   2.        ]
 [  0.           3.          10.39230485   3.        ]]
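Note that scipy's linkage matrix also carries the number of original observations in each merged cluster as a fourth column, which scipy.cluster.hierarchy.dendrogram needs. A minimal, untested sketch of rebuilding that column from the modified class's output (reusing clustering and d from the example above) could look like this:

import numpy as np
from scipy.cluster.hierarchy import dendrogram

# Count the original observations under each merge recorded in children_
n_samples = d.shape[0]
counts = np.zeros(clustering.children_.shape[0])
for i, (a, b) in enumerate(clustering.children_):
    counts[i] = sum(1 if child < n_samples else counts[child - n_samples]
                    for child in (a, b))

# Rows of the form [idx1, idx2, distance, sample_count], as dendrogram expects
Z = np.column_stack([clustering.children_, clustering.distance, counts]).astype(float)
dendrogram(Z)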

Just for fun, I decided to follow up on your statement about performance:

from AgglomerativeClustering import AgglomerativeClustering
from scipy.cluster.hierarchy import linkage
import numpy as np
import time

l = 1000; iters = 50
d = [np.random.random(100) for _ in xrange(1000)]

t = time.time()
for _ in xrange(iters):
    clustering = AgglomerativeClustering(n_clusters=l-1,
                                         affinity='euclidean',
                                         linkage='complete')
    clustering.fit(d)
scikit_time = (time.time() - t) / iters
print 'scikit-learn Time: {0}s'.format(scikit_time)

t = time.time()
for _ in xrange(iters):
    linkage(d, 'complete')
scipy_time = (time.time() - t) / iters
print 'SciPy Time: {0}s'.format(scipy_time)

print 'scikit-learn Speedup: {0}'.format(scipy_time / scikit_time)

This gave me the following results:

scikit-learn Time: 0.566560001373s
SciPy Time: 0.497740001678s
scikit-learn Speedup: 0.878530077083

According to this, the scikit-learn implementation takes about 1.14x the execution time of the SciPy implementation, i.e. the SciPy implementation is roughly 1.14x faster here. It should be noted that:

  1. I modified the original scikit-learn implementation.

  2. I only did a small number of iterations.

  3. I only tried a small number of test cases (both the cluster size and the number of items per dimension should be tested).

  4. I ran SciPy second, so it has the advantage of getting more cache hits on the source data.

  5. The two methods don't do exactly the same thing.

With all that in mind, you should really evaluate which method performs better for your specific application. There are also functional reasons to go with one implementation over the other.

I'm trying to draw a complete-linkage scipy.cluster.hierarchy.dendrogram, and I found that scipy.cluster.hierarchy.linkage is slower than sklearn.AgglomerativeClustering.

However, sklearn.AgglomerativeClustering doesn't return the distance between clusters and the number of original observations, which scipy.cluster.hierarchy.dendrogram needs. Is there any way to get them?


I made a snippet to do this without modifying sklearn and without recursive functions. Before using it, note that:

  • The merge distance can sometimes decrease with respect to the children's merge distance. I added three ways to handle those cases: take the maximum, do nothing, or increase with the l2 norm. The l2 norm logic has not been verified yet. Please check which one suits your needs best.

Import the packages:

from sklearn.cluster import AgglomerativeClustering
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram

Function to compute the weights and distances:

def get_distances(X, model, mode='l2'):
    distances = []
    weights = []
    children = model.children_
    dims = (X.shape[1], 1)
    distCache = {}
    weightCache = {}
    for childs in children:
        c1 = X[childs[0]].reshape(dims)
        c2 = X[childs[1]].reshape(dims)
        c1Dist = 0
        c1W = 1
        c2Dist = 0
        c2W = 1
        if childs[0] in distCache.keys():
            c1Dist = distCache[childs[0]]
            c1W = weightCache[childs[0]]
        if childs[1] in distCache.keys():
            c2Dist = distCache[childs[1]]
            c2W = weightCache[childs[1]]
        d = np.linalg.norm(c1 - c2)
        cc = ((c1W * c1) + (c2W * c2)) / (c1W + c2W)

        X = np.vstack((X, cc.T))

        newChild_id = X.shape[0] - 1

        # How to deal with a higher level cluster merge with lower distance:
        if mode == 'l2':  # Increase the higher level cluster size using an l2 norm
            added_dist = (c1Dist**2 + c2Dist**2)**0.5
            dNew = (d**2 + added_dist**2)**0.5
        elif mode == 'max':  # If the previous clusters had a higher distance, use that one
            dNew = max(d, c1Dist, c2Dist)
        elif mode == 'actual':  # Plot the actual distance.
            dNew = d

        wNew = (c1W + c2W)
        distCache[newChild_id] = dNew
        weightCache[newChild_id] = wNew

        distances.append(dNew)
        weights.append(wNew)
    return distances, weights

Make sample data with 2 clusters, each containing 2 subclusters:

# Make 4 distributions, two of which form a bigger cluster
X1_1 = np.random.randn(25, 2) + [8, 1.5]
X1_2 = np.random.randn(25, 2) + [8, -1.5]
X2_1 = np.random.randn(25, 2) - [8, 3]
X2_2 = np.random.randn(25, 2) - [8, -3]

# Merge the four distributions
X = np.vstack([X1_1, X1_2, X2_1, X2_2])

# Plot the clusters
colors = ['r']*25 + ['b']*25 + ['g']*25 + ['y']*25
plt.scatter(X[:, 0], X[:, 1], c=colors)

Sample data:

Fit the clustering model:

model = AgglomerativeClustering(n_clusters=2, linkage="ward")
model.fit(X)

Call the function to find the distances, and pass them to the dendrogram:

distance, weight = get_distances(X, model)
linkage_matrix = np.column_stack([model.children_, distance, weight]).astype(float)

plt.figure(figsize=(20, 10))
dendrogram(linkage_matrix)
plt.show()
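If the default 'l2' correction gives merge heights that look odd for your data, the same function accepts the other modes described above, for example:

# 'max' keeps merge heights monotonically non-decreasing,
# 'actual' uses the raw centroid distances
distance, weight = get_distances(X, model, mode='max')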

Output dendrogram:


You'll need to generate a "linkage matrix" from the children_ array, where each row of the linkage matrix has the format [idx1, idx2, distance, sample_count].

This is not meant to be a paste-and-run solution; I'm not keeping track of what I needed to import, but it should be fairly clear anyway.

Here is one way to generate the required Z structure and visualize the result.

X is your n_samples x n_features input data.
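Since the snippets below don't repeat their imports, here is roughly what they rely on, together with hypothetical toy values for X and n (both values are assumptions for this walkthrough, not part of the original answer):

import numpy as np
import matplotlib.pyplot as plt
import sklearn.cluster
from sklearn import metrics   # used for pairwise distances below

X = np.random.randn(50, 2)    # hypothetical toy data: 50 samples, 2 features
n = 3                         # hypothetical number of clusters requested below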

Cluster:

agg_cluster = sklearn.cluster.AgglomerativeClustering(n_clusters=n)
agg_labels = agg_cluster.fit_predict(X)

Some empty data structures:

Z = []

# should really call this cluster dict
node_dict = {}
n_samples = len(X)

Write a recursive function to gather all the leaf nodes associated with a given cluster, and compute distances and centroid positions:

def get_all_children(k, verbose=False):
    i, j = agg_cluster.children_[k]

    if k in node_dict:
        return node_dict[k]['children']

    if i < n_samples:
        left = [i]
    else:
        # read the AgglomerativeClustering doc. to see why I select i-n_samples
        left = get_all_children(i - n_samples)
    if j < n_samples:
        right = [j]
    else:
        right = get_all_children(j - n_samples)

    if verbose:
        print k, i, j, left, right

    left_pos = np.mean(map(lambda ii: X[ii], left), axis=0)
    right_pos = np.mean(map(lambda ii: X[ii], right), axis=0)

    # this assumes that agg_cluster used euclidean distances
    dist = metrics.pairwise_distances([left_pos, right_pos],
                                      metric='euclidean')[0, 1]

    all_children = [x for y in [left, right] for x in y]
    pos = np.mean(map(lambda ii: X[ii], all_children), axis=0)

    # store the results to speed up any additional or recursive evaluations
    node_dict[k] = {'top_child': [i, j], 'children': all_children,
                    'pos': pos, 'dist': dist,
                    'node_i': k + n_samples}
    return all_children

Fill node_dict and generate Z - with distance and n_samples per node:

for k, x in enumerate(agg_cluster.children_):
    get_all_children(k, verbose=False)

# Every row in the linkage matrix has the format [idx1, idx2, distance, sample_count].
Z = [[v['top_child'][0], v['top_child'][1], v['dist'], len(v['children'])]
     for k, v in node_dict.iteritems()]

# create a version with log scaled distances for easier visualization
Z_log = [[v['top_child'][0], v['top_child'][1],
          np.log(1.0 + v['dist']), len(v['children'])]
         for k, v in node_dict.iteritems()]
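One caveat that is my addition rather than part of the original answer: node_dict is a plain dict, so iteritems() is not guaranteed to return the merges in order, while dendrogram expects row i of Z to describe the cluster labelled n_samples + i. If the plot comes out scrambled, build Z over the sorted keys instead:

Z = [[node_dict[k]['top_child'][0], node_dict[k]['top_child'][1],
      node_dict[k]['dist'], len(node_dict[k]['children'])]
     for k in sorted(node_dict)]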

Plot it using the scipy dendrogram:

from scipy.cluster import hierarchy

plt.figure()
dn = hierarchy.dendrogram(Z_log, p=4, truncate_mode='level')
plt.show()

And be disappointed by how opaque this visualization is, wishing you could interactively drill down into the larger clusters and examine directional (not scalar) distances between the centroids :( - maybe a bokeh solution exists?

References:

http://docs.scipy.org/doc/scipy/reference/generated/scipy.cluster.hierarchy.dendrogram.html

https://joernhees.de/blog/2015/08/26/scipy-hierarchical-clustering-and-dendrogram-tutorial/#Selecting-a-Distance-Cut-Off-aka-Determining-the-Number-of-Clusters