It's Scanpy friendly! A Python package for spectral clustering based on the powerful suite of tools named too-many-cells.
In essence, you can use toomanycells to partition a data set, given as a matrix of integers or floating-point numbers, into clusters whose members are similar to each other. The rows represent observations and the columns the features. However, sometimes just knowing the clusters is not sufficient. Often, we are interested in the relationships between the clusters, and this tool helps you visualize the clusters as leaf nodes of a tree, where the branches illustrate the trajectories that have to be followed to reach a particular cluster. Initially, this tool partitions your data set into two subsets (each subset is a node of the tree), trying to maximize the difference between the two. Subsequently, it reapplies the same criterion to each subset (node) and continues bifurcating until the modularity of the node that is about to be partitioned falls below a given threshold value.
- Free software: GNU AFFERO GENERAL PUBLIC LICENSE
- Documentation: https://JRR3.github.io/toomanycells
As of version 1.0.40, toomanycells no longer requires Graphviz, so there are no external dependencies!
To keep control of your working environment you can use a Python virtual environment, which helps you keep only the packages you need in one location. In bash or zsh you can simply type

```bash
python -m venv /path/to/new/virtual/environment
```

To activate it:

```bash
source /path/to/new/virtual/environment/bin/activate
```

To deactivate the environment, use the intuitive

```bash
deactivate
```
Just type

```bash
pip install toomanycells==1.0.42
```

in your home or custom environment. If you want to upgrade to the latest version, use the `-U` flag:

```bash
pip install toomanycells -U
```

Make sure you have the latest version; if not, run the previous command again.
If you want to see a concrete example of how to use toomanycells, check out the Jupyter notebook demo.
- First, import the module:

```python
from toomanycells import TooManyCells as tmc
```
- If you already have an AnnData object `A` loaded into memory, you can create a TooManyCells object with

```python
tmc_obj = tmc(A)
```

In this case the output folder will be called `tmc_outputs`. However, if you want the output folder to be a particular directory, you can specify the path as follows:

```python
tmc_obj = tmc(A, output_directory)
```
- If instead of providing an AnnData object you want to provide the directory where your data is located, you can use the syntax

```python
tmc_obj = tmc(input_directory, output_directory)
```
- If your input directory has a file in the Matrix Market format, you have to indicate this with the following flag:

```python
tmc_obj = tmc(input_directory,
              output_directory,
              input_is_matrix_market=True)
```

Under this scenario, the `input_directory` must contain a `.mtx` file, a `barcodes.tsv` file (the observations), and a `genes.tsv` file (the features).
- Once your data has been loaded successfully, you can start the clustering process with the following command:

```python
tmc_obj.run_spectral_clustering()
```

On my desktop computer, processing a data set with ~90K cells (observations) and ~30K genes (features) took a little less than 6 minutes over 1809 iterations. For a larger data set like the Tabula Sapiens, with 483,152 cells and 58,870 genes (14.51 GB in zip format), the total time was about 50 minutes on the same computer.
- At the end of the clustering process the `.obs` data frame of the AnnData object should have two new columns, `['sp_cluster', 'sp_path']`, which contain the cluster labels and the path from the root node to the leaf node, respectively:

```python
tmc_obj.A.obs[['sp_cluster', 'sp_path']]
```
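Since these are ordinary pandas columns, you can inspect them directly; a minimal sketch using only the columns created above:

```python
# Number of cells assigned to each cluster (plain pandas).
cluster_sizes = tmc_obj.A.obs["sp_cluster"].value_counts()
print(cluster_sizes.head())

# Path from the root to the leaf for the first observation.
print(tmc_obj.A.obs["sp_path"].iloc[0])
```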
- To generate the outputs, just call the function

```python
tmc_obj.store_outputs()
```

This call generates a JSON file containing the nodes and edges of the graph (`graph.json`), one CSV file that describes the cluster information (`clusters.csv`), another CSV file containing the information of each node (`node_info.csv`), and two more JSON files: one relates cells to clusters (`cluster_list.json`), and the other has the full tree structure (`cluster_tree.json`). You need this last file for too-many-cells interactive (TMCI).
- If you already have the `graph.json` file, you can load it with

```python
tmc_obj.load_graph(json_fname="some_path")
```
- If you want to visualize your results in a dynamic platform, I strongly recommend the tool too-many-cells-interactive. To use it, first make sure that you have Docker Compose and Docker. One simple way of getting both is by installing Docker Desktop. Note that with macOS the instructions are slightly different. If you use Nix, simply add the packages `pkgs.docker` and `pkgs.docker-compose` to your configuration or `home.nix` file and run

```bash
home-manager switch
```
- If you installed Docker Desktop you probably don't need this step. However, under some distributions the following two commands have proven essential. Use

```bash
sudo dockerd
```

to start the daemon service for Docker containers, and

```bash
sudo chmod 666 /var/run/docker.sock
```

to let Docker read and write to that socket.
- Now clone the repository:

```bash
git clone https://github.com/schwartzlab-methods/too-many-cells-interactive.git
```

Store the path to the too-many-cells-interactive folder in a variable, for example `path_to_tmc_interactive`. You will also need to identify a column in your `AnnData.obs` data frame that has the labels for the cells; let's assume the column name is stored in the variable `cell_annotations`. Lastly, you can provide a port number to host your visualization, for instance `port_id=1234`. Then, you can call the function

```python
tmc_obj.visualize_with_tmc_interactive(
    path_to_tmc_interactive,
    cell_annotations,
    port_id)
```
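Putting the pieces together, a minimal sketch might look like this; the concrete path, column name, and port are placeholders you should adapt to your own setup:

```python
# Hypothetical values: adjust to your own environment.
path_to_tmc_interactive = "/home/user/too-many-cells-interactive"
cell_annotations = "cell_type"  # a column of tmc_obj.A.obs
port_id = 1234

tmc_obj.visualize_with_tmc_interactive(
    path_to_tmc_interactive,
    cell_annotations,
    port_id)
# Then open http://localhost:1234 in your browser.
```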
The following visualization corresponds to the data set with ~90K cells (observations).
And this is the visualization for the Tabula Sapiens data set with ~480K cells.
How does toomanycells scale with the size of the input? To answer that question we created the following benchmark. We tested the performance of toomanycells on 20 data sets having the following numbers of cells: 6360, 10479, 12751, 16363, 23973, 32735, 35442, 40784, 48410, 53046, 57621, 62941, 68885, 76019, 81449, 87833, 94543, 101234, 107809, 483152. The range goes from a few thousand cells to almost half a million. These are the results.
As you can see, the program behaves linearly with respect to the size of the input. In other words, the observations fit a linear model of the form $T(n) \approx \alpha n + \beta$, where $n$ is the number of cells and $T(n)$ is the processing time.
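If you want to reproduce this kind of fit with your own timings, a minimal NumPy sketch could look like this; the cell counts come from the benchmark above, while the run times are placeholders for your own measurements:

```python
import numpy as np

# Cell counts used in the benchmark above.
n_cells = np.array([6360, 10479, 12751, 16363, 23973, 32735,
                    35442, 40784, 48410, 53046, 57621, 62941,
                    68885, 76019, 81449, 87833, 94543, 101234,
                    107809, 483152])

# Placeholder: replace with the run times (seconds) you measured.
run_times = np.empty_like(n_cells, dtype=float)

# Least-squares fit of T(n) = alpha * n + beta.
alpha, beta = np.polyfit(n_cells, run_times, deg=1)
print(f"T(n) ~ {alpha:.3e} * n + {beta:.3f}")
```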
When visualizing the tree, we are often interested in observing how different cell types distribute across the branches of the tree. In case your AnnData object lacks a cell annotation column in the `obs` data frame, or if you already have one but want to try a different method, we have created a wrapper function that calls CellTypist. Simply write

```python
tmc_obj.annotate_with_celltypist(
    column_label_for_cell_annotations,
)
```

and the `obs` data frame of your AnnData object will gain a column named after the string stored in the `column_label_for_cell_annotations` variable.
By default we use the `Immune_All_High` CellTypist model, which contains 32 cell types. If you want to use another model, simply write

```python
tmc_obj.annotate_with_celltypist(
    column_label_for_cell_annotations,
    celltypist_model,
)
```

where `celltypist_model` describes the type of model to be used by the library. For example, if this variable equals `Immune_All_Low`, the number of possible cell types increases to 98. For a complete list, see the CellTypist collection of models.
Lastly, if you want to use the fact that transcriptionally similar cells are likely to cluster together, you can assign the cell type labels on a cluster-by-cluster basis rather than a cell-by-cell basis. To activate this feature, use the call

```python
tmc_obj.annotate_with_celltypist(
    column_label_for_cell_annotations,
    celltypist_model,
    use_majority_voting=True,
)
```
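After any of these calls you can sanity-check the resulting annotation; a minimal sketch, assuming the column name is still stored in `column_label_for_cell_annotations`:

```python
# Distribution of predicted cell types across all cells.
print(tmc_obj.A.obs[column_label_for_cell_annotations].value_counts())
```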
Work in progress...
Imagine you want to compare the heterogeneity of cell populations belonging to different branches of the toomanycells tree. By branch we mean all the nodes that derive from a particular node, including the node that defines the branch in question. For example, suppose we want to compare branch 1183 against branch 2. One way to do this is by comparing the modularity distribution and the cumulative modularity for all the nodes that belong to each branch. We can do that using the following calls. First, for branch 1183:
```python
tmc_obj.quantify_heterogeneity(
    list_of_branches=[1183],
    use_log_y=True,
    tag="branch_A",
    show_column_totals=True,
    color="blue",
    file_format="svg")
```
And then for branch 2:

```python
tmc_obj.quantify_heterogeneity(
    list_of_branches=[2],
    use_log_y=True,
    tag="branch_B",
    show_column_totals=True,
    color="red",
    file_format="svg")
```
Note that you can include multiple nodes in the list of branches. From these figures we observe that the higher cumulative modularity of branch 1183 with respect to branch 2 suggests that the former has a higher degree of heterogeneity. However, relying on modularity alone could lead to a misleading interpretation. For example, consider the following scenario, where the numbers within the nodes indicate the modularity at that node.
In this case, scenario A has a larger cumulative modularity, but we note that scenario B is more heterogeneous. For that reason we recommend also computing additional diversity measures. First, we need some notation. For all the branches belonging to the list of branches passed to the above function `quantify_heterogeneity`, let $S$ be the total number of clusters and let $p_i$ be the proportion of cells that belong to cluster $i$. Then we define the following diversity measure of order $q$:

$${}^{q}D = \left( \sum_{i=1}^{S} p_i^{\,q} \right)^{1/(1-q)}.$$

In general, the larger the value of ${}^{q}D$, the more diverse the population, and the larger the value of $q$, the more weight is given to the most abundant clusters. When $q=0$ we recover the richness $S$, i.e., the total number of clusters. When $q \to 1$, the measure converges to $\exp(H)$, where $H = -\sum_{i=1}^{S} p_i \ln p_i$ is the Shannon entropy. When $q=2$, we obtain the inverse of Simpson's index

$$\lambda = \sum_{i=1}^{S} p_i^{2},$$

which represents the probability that two cells picked at random belong to the same species. Hence, the higher the Simpson's index, the less diverse is the ecosystem. Lastly, when $q \to \infty$, the measure converges to $1/\max_i p_i$, the inverse of the maximum proportion (MaxProp in the tables below).
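These quantities are easy to compute by hand; a minimal NumPy sketch mirroring the definitions above (not the package's internal code), assuming `p` holds the vector of cluster proportions summing to one:

```python
import numpy as np

def hill_number(p: np.ndarray, q: float) -> float:
    """Diversity of order q for a vector of proportions p."""
    p = p[p > 0]  # empty clusters do not contribute
    if q == 0:
        return float(p.size)                          # richness
    if q == 1:
        return float(np.exp(-(p * np.log(p)).sum()))  # exp(Shannon)
    if np.isinf(q):
        return float(1.0 / p.max())                   # 1 / MaxProp
    return float((p ** q).sum() ** (1.0 / (1.0 - q)))

# Toy example: cells spread over 4 clusters.
p = np.array([0.4, 0.3, 0.2, 0.1])
for q in (0, 1, 2, np.inf):
    print(f"q = {q}: {hill_number(p, q):.4f}")
```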
In the above example, for branch 1183 we obtain

| measure | value |
|---|---|
| Richness | 460.000000 |
| Shannon | 5.887544 |
| Simpson | 0.003361 |
| MaxProp | 0.010369 |
| q = 0 | 460.000000 |
| q = 1 | 360.518784 |
| q = 2 | 297.562094 |
| q = inf | 96.442786 |
and for branch 2 we obtain

| measure | value |
|---|---|
| Richness | 280.000000 |
| Shannon | 5.500414 |
| Simpson | 0.004519 |
| MaxProp | 0.010750 |
| q = 0 | 280.000000 |
| q = 1 | 244.793371 |
| q = 2 | 221.270778 |
| q = inf | 93.021531 |
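Note how the rows of each table are related through the definition above: for branch 1183, $e^{\text{Shannon}}$, $1/\text{Simpson}$, and $1/\text{MaxProp}$ reproduce the $q=1$, $q=2$, and $q=\infty$ entries. A quick check:

```python
import numpy as np

# Values taken from the branch 1183 table above.
shannon, simpson, max_prop = 5.887544, 0.003361, 0.010369
print(np.exp(shannon))   # ~ 360.5  (q = 1)
print(1 / simpson)       # ~ 297.6  (q = 2)
print(1 / max_prop)      # ~ 96.4   (q = inf)
```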
After comparing the results using two different measures, namely, modularity and diversity, we conclude that branch 1183 is more heterogeneous than branch 2.
So far we have assumed that the similarity matrix is built with the cosine similarity

$$S(x,y) = \frac{x \cdot y}{\lVert x \rVert_2 \, \lVert y \rVert_2}.$$

However, this is not the only way to compute a similarity matrix. Below we list all the available similarity functions and how to call them.
If your matrix is sparse, i.e., the number of nonzero entries is proportional to the number of samples ($O(n)$), and you want to use the cosine similarity, then use

```python
tmc_obj.run_spectral_clustering(
    similarity_function="cosine_sparse")
```
By default we use the Halko-Martinsson-Tropp (randomized) algorithm to compute the truncated singular value decomposition. However, the ARPACK library (written in Fortran) is also available:

```python
tmc_obj.run_spectral_clustering(
    similarity_function="cosine_sparse",
    svd_algorithm="arpack")
```
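For context, the same two SVD backends are exposed by scikit-learn's `TruncatedSVD`, so you can compare them on a matrix of your own; a small standalone sketch, independent of toomanycells:

```python
from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD

# A sparse toy matrix: 1000 observations, 300 features, 1% nonzeros.
X = sparse_random(1000, 300, density=0.01, format="csr", random_state=0)

# Halko-Martinsson-Tropp randomized SVD.
svd_rand = TruncatedSVD(n_components=20, algorithm="randomized",
                        random_state=0)
# ARPACK (implicitly restarted Lanczos, Fortran).
svd_arpack = TruncatedSVD(n_components=20, algorithm="arpack")

print(svd_rand.fit(X).explained_variance_ratio_.sum())
print(svd_arpack.fit(X).explained_variance_ratio_.sum())
```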
If your matrix has negative entries, the resulting similarities can themselves be negative, which is problematic for the clustering step; see the note on shifting the similarity matrix below.
If your matrix is dense, and you want to use the cosine similarity, then use the following instruction:

```python
tmc_obj.run_spectral_clustering(
    similarity_function="cosine")
```
The same comment about negative entries applies here. However, there is a simple solution. While shifting the matrix of observations can drastically change the interpretation of the data, because each column lives in a different (gene) space, shifting the similarity matrix is a reasonable method to remove negative entries. The reason is that similarities live in an ordered space, and shifting by a constant is an order-preserving transformation. Equivalently, if the similarity between $x$ and $y$ is larger than the similarity between $u$ and $w$, then the same relation holds after the shift. To shift the similarity matrix by $1$, use

```python
tmc_obj.run_spectral_clustering(
    similarity_function="cosine",
    shift_similarity_matrix=1)
```
Note that since the range of the cosine similarity is $[-1, 1]$, a shift of $1$ maps it to $[0, 2]$ and guarantees nonnegative entries.
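As a quick illustration (plain NumPy, not part of the library), shifting similarities by a constant removes the negatives while keeping their ordering intact:

```python
import numpy as np

# Toy cosine similarities, some negative.
sims = np.array([-0.8, -0.1, 0.3, 0.9])
shifted = sims + 1  # now in [0, 2]

# The ranking of the pairs is unchanged by the shift.
assert (np.argsort(sims) == np.argsort(shifted)).all()
print(shifted)  # [0.2 0.9 1.3 1.9]
```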
With the `laplacian` similarity function, the similarity matrix is

$$S(x,y) = \exp\left(-\gamma \, \lVert x - y \rVert_1\right).$$

This is an example:

```python
tmc_obj.run_spectral_clustering(
    similarity_function="laplacian",
    similarity_gamma=0.01)
```
This function is very sensitive to $\gamma$. If the computation fails or the results look degenerate, try a smaller value for $\gamma$.
With the `gaussian` similarity function, the similarity matrix is

$$S(x,y) = \exp\left(-\gamma \, \lVert x - y \rVert_2^{2}\right).$$

This is an example:

```python
tmc_obj.run_spectral_clustering(
    similarity_function="gaussian",
    similarity_gamma=0.001)
```

As before, this function is very sensitive to $\gamma$.
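To get a feel for how $\gamma$ shapes these two kernels, note that scikit-learn ships both as pairwise functions; a small standalone sketch, independent of toomanycells:

```python
import numpy as np
from sklearn.metrics.pairwise import laplacian_kernel, rbf_kernel

# Five toy observations with 20 features each.
X = np.random.default_rng(0).normal(size=(5, 20))

# exp(-gamma * ||x - y||_1)
L = laplacian_kernel(X, gamma=0.01)
# exp(-gamma * ||x - y||_2^2)
G = rbf_kernel(X, gamma=0.001)

print(L.round(3))
print(G.round(3))
```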
With the `div_by_sum` similarity function, each observation is first divided by the sum of its entries, i.e., $x$ is replaced by $\hat{x} = x / \sum_{k} x_{k}$, and the similarity matrix is then computed from these normalized vectors:

```python
tmc_obj.run_spectral_clustering(
    similarity_function="div_by_sum")
```
If you want to use the inverse document frequency (IDF) normalization, then use

```python
tmc_obj.run_spectral_clustering(
    similarity_function="some_sim_function",
    use_tf_idf=True)
```
If you also want to normalize the frequencies to unit norm with the $\ell_2$ norm, then use

```python
tmc_obj.run_spectral_clustering(
    similarity_function="some_sim_function",
    use_tf_idf=True,
    tf_idf_norm="l2")
```

If instead you want to use the $\ell_1$ norm, set `tf_idf_norm="l1"`.
Sometimes normalizing your matrix of observations can improve the performance of some routines. To normalize the rows, use the following instruction:

```python
tmc_obj.run_spectral_clustering(
    similarity_function="some_sim_function",
    normalize_rows=True)
```

By default, the rows are normalized with the $\ell_2$ norm, but you can select a different $\ell_p$ norm by passing $p$ through the `similarity_norm` argument:

```python
tmc_obj.run_spectral_clustering(
    similarity_function="some_sim_function",
    normalize_rows=True,
    similarity_norm=p)
```
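Row normalization itself is a one-liner outside the library too; for instance, with scikit-learn (an independent illustration, not the internal implementation):

```python
import numpy as np
from sklearn.preprocessing import normalize

X = np.array([[3.0, 4.0],
              [1.0, 1.0]])

# Each row scaled to unit Euclidean length.
X_l2 = normalize(X, norm="l2", axis=1)
# Each row scaled so its absolute values sum to one.
X_l1 = normalize(X, norm="l1", axis=1)

print(X_l2)  # [[0.6, 0.8], [0.7071, 0.7071]]
print(X_l1)  # [[0.4286, 0.5714], [0.5, 0.5]]
```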
Imagine you have the following tree structure after running toomanycells. Further, assume that the colors denote different classes satisfying specific properties. We want to know how the expression of two genes, for instance `Gene S` and `Gene T`, fluctuates as we move from node `Class B` to node `Class C`. To compute such quantities, we first need to define the distance between nodes.
Assume we have a (parent) node $P$ and one of its children $C$, and let $d(P,C)$ denote the distance between them. We also define $d(C,P) = d(P,C)$, as expected. Now that we know how to calculate the distance between a node and its parent or child, let $X$ and $Y$ be two arbitrary nodes of the tree and consider a sequence of nodes ${(N_i)}_{i=0}^{n}$ satisfying:

- $N_0 = X$,
- $N_n = Y$,
- $N_i$ is a direct relative of $N_{i+1}$, i.e., $N_i$ is either a child or parent of $N_{i+1}$,
- $N_i \neq N_j$ for $i \neq j$.

Then, the distance between $X$ and $Y$ is

$$d(X,Y) = \sum_{i=0}^{n-1} d(N_i, N_{i+1}).$$
We define the expression $E$ of `Gene G` at a node $N$ as the mean expression of `Gene G` over all the cells that belong to node $N$. Hence, for the sequence of nodes ${(N_i)}_{i=0}^{n}$ we can compute the corresponding gene expression sequence. Lastly, since we are interested in plotting the gene expression as a function of the distance with respect to the node $X$, we also compute the sequence of distances $D_i = d(X, N_i)$. In summary, we have:
- The sequence of nodes between $X$ and $Y$: $${(N_{i})}_{i=0}^{n}$$
- The sequence of gene expression levels between $X$ and $Y$: $${(E_{i})}_{i=0}^{n}$$
- The sequence of distances with respect to node $X$: $${(D_{i})}_{i=0}^{n}$$
The final plot is simply ${(D_i, E_i)}_{i=0}^{n}$, i.e., the gene expression as a function of the distance from node $X$.
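To give a flavor of the computation, here is a minimal, self-contained sketch of the three sequences for a toy path; the node-to-cell mapping, per-edge distances, and expression vector are all hypothetical stand-ins, not the package's internal API:

```python
import numpy as np

# Hypothetical path of node ids from X to Y.
path = [0, 1, 4, 9]
# Hypothetical distance of each edge along the path.
edge_lengths = [0.7, 0.3, 0.5]
# Hypothetical mapping: node id -> indices of its cells.
cells_of = {0: [0, 1, 2], 1: [0, 1], 4: [3, 4], 9: [5]}
# Hypothetical expression of Gene G for each of 6 cells.
expr = np.array([5.0, 3.0, 1.0, 4.0, 2.0, 0.5])

# D_i: cumulative distance from X to node N_i.
D = np.concatenate(([0.0], np.cumsum(edge_lengths)))
# E_i: mean expression of Gene G over the cells of N_i.
E = np.array([expr[cells_of[n]].mean() for n in path])

for n, d, e in zip(path, D, E):
    print(f"node {n}: distance {d:.2f}, expression {e:.2f}")
```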
Note how the expression of `Gene A` is high relative to that of `Gene B` at node $X$, while `Gene B` is highly expressed relative to `Gene A` at node $Y$.
I would like to thank the Schwartz lab (GW) for letting me explore different directions and also Christie Lau for providing multiple test cases to improve this implementation.