As part of the ipyrad.analysis
toolkit we've created convenience functions for easily distributing STRUCTURE analysis jobs on an HPC cluster, and for doing so in a programmatic and reproducible way. Importantly, our workflow makes it easy to sample different distributions of unlinked SNPs among replicate analyses, with the final inferred population structure summarized from the distribution of replicates. We also provide simple interactive plotting functions to make barplots and slightly fancier figures, like the ones below.
This is a Jupyter notebook, a reproducible and executable document. The code in this notebook is Python (2.7), and should be executed either in a jupyter-notebook, like this one, or in an IPython terminal. Execute each cell in order to reproduce our entire analysis. We make use of the ipyparallel
Python library to distribute STRUCTURE jobs across processors in parallel. If that is confusing, see our tutorial on using ipcluster with jupyter. The example data set used in this analysis is from the empirical example ipyrad tutorial.
You can easily install the required software for this notebook locally using conda
by running the commented code below in a terminal. If you are working on an HPC cluster you do not need administrator privileges to install the software in this way, since it is only installed locally.
## conda install ipyrad -c ipyrad
## conda install structure -c ipyrad
## conda install clumpp -c ipyrad
## conda install toytree -c eaton-lab
import ipyrad.analysis as ipa ## ipyrad analysis toolkit
import ipyparallel as ipp ## parallel processing
import toyplot ## plotting library
Start an ipcluster
instance in a separate terminal. An easy way to do this when running a jupyter-notebook on an HPC cluster is to go to your Jupyter dashboard, click [new] and then [terminal], and run 'ipcluster start
' in that terminal. This will start a local cluster on the compute node you are connected to. See our ipyparallel tutorial (coming soon) for further details.
##
## ipcluster start --n=40
##
## get parallel client
ipyclient = ipp.Client()
print "Connected to {} cores".format(len(ipyclient))
## the structure formatted file
strfile = "./erisor.str"
## an optional mapfile, to sample unlinked SNPs
mapfile = "./erisor.snps.map"
## the directory where outfiles should be written
workdir = "./"
Structure is a somewhat old-fashioned program that requires several input files to run, which makes it inconvenient to use in a programmatic and reproducible way. To work around this we've created a wrapper object that makes it easy to submit Structure jobs and to summarize their results.
## create a Structure object
struct = ipa.structure(name="structure-test",
                       data=strfile,
                       mapfile=mapfile,
                       workdir=workdir)
Our Structure object will be used to submit jobs to the cluster. It has associated with it a name, a set of input files, and a large number of parameter settings. You can modify the parameters by setting them as shown below, and you can use tab-completion to see all of the available options, or print them to view their current values. See the full structure docs here for further details on the function of each parameter. In support of reproducibility, it is good practice to print both the mainparams and extraparams so it is clear which options you used.
## set mainparams for object
struct.mainparams.burnin = 10000
struct.mainparams.datatype = 0
struct.mainparams.numreps = 100000
struct.extraparams.ancestdist = 1
## see all mainparams
print struct.mainparams
## see or set extraparams
print struct.extraparams
The function run()
distributes jobs to run on the cluster and load-balances the parallel workload. It takes a number of arguments. The first, kpop
, is the number of populations. The second, nreps
, is the number of replicated runs to perform. Each replicate uses a different random seed, and if you entered a mapfile for your Structure object then unlinked SNPs are subsampled independently in each replicate. The seed
argument can be used to make the replicate analyses reproducible. The extraparams.seed
parameter for each replicate will be generated from this seed. Finally, pass in the ipyclient
object that we created above. The Structure object stores an asynchronous results object for each job that is submitted so that we can query whether the jobs are finished yet or not. Using a simple for-loop we'll submit 20 replicate jobs at each of five values of K.
## a range of K-values to test
tests = [2,3,4,5,6]
## submit batches of 20 replicate jobs for each value of K
for kpop in tests:
    struct.run(
        kpop=kpop,
        nreps=20,
        seed=12345,
        ipyclient=ipyclient,
    )
You can check for finished results by using the get_clumpp_table()
function, which tries to summarize the finished results files. If no results are ready it will simply print a warning message telling you to wait. If you want the notebook to block/wait until all jobs are finished then execute the wait()
function of the ipyclient object, like below.
## see submitted jobs (we query first 10 here)
struct.asyncs[:10]
## query a specific job result by index
if struct.asyncs[20].ready():
    print struct.asyncs[20].result()
## block/wait until all jobs finished
ipyclient.wait()
We ran 20 replicates per K-value hypothesis. We now need to concatenate and permute those results so they can be summarized. For this we use the software CLUMPP. The default arguments to CLUMPP are generally good, but you can modify them in the same way as the structure params, by accessing the .clumppparams
attribute of your structure object. See the CLUMPP documentation for more details. If you have a large number of samples (>50) you may wish to use the largeKgreedy
algorithm (m=3) for faster runtimes. Below we run CLUMPP for each value of K that we ran structure on. You only need to tell the get_clumpp_table()
function the value of K and it will find all of the result files given the Structure object's name
and workdir
.
## set some clumpp params
struct.clumppparams.m = 3 ## use largeKgreedy algorithm
struct.clumppparams.greedy_option = 2 ## test nrepeat possible orders
struct.clumppparams.repeats = 10000 ## number of repeats
struct.clumppparams.print_every_perm = 1
struct.clumppparams
## run clumpp for each value of K
tables = {}
for kpop in tests:
    tables[kpop] = struct.get_clumpp_table(kpop)
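Each entry in tables is a pandas DataFrame of permuted mean admixture proportions, with one row per sample and one column per cluster. A quick sketch to check one of them:
## peek at the first few rows of the summarized table for K=3
print tables[3].head()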
Passing a custom list of ordered names can be useful if, for example, you want the samples to appear in the same order as the tips on your phylogeny.
## custom sorting order
myorder = [
"p_001s_02",
"p_001s_03",
"p_001s_09",
"p_001s_12",
"p_001s_13",
"p_001s_14",
"p_001s_16",
"p_002.5s_01",
"p_002.5s_04",
"p_002.5s_05",
"p_002.5s_07",
"p_002.5s_09",
"p_002.5s_10",
"p_002s_01",
"p_002s_03",
"p_002s_05",
"p_002s_06",
"p_002s_08",
"p_002s_09",
"p_002s_10",
"p_002s_12",
"p_002s_13",
"p_002s_14",
"p_002s_16",
"p_031s_02",
"p_031s_04",
"p_031s_05",
"p_031s_07",
"p_031s_08",
"p_031s_11",
"p_031s_14",
"p_029s_02",
"p_029s_03",
"p_029s_04",
"p_029s_05",
"p_029s_09",
"p_029s_10",
"p_004.5s_011",
"p_004.5s_05",
"p_004.5s_07",
"p_004.5s_08",
"p_004.5s_12",
"p_004.5s_15",
"p_004s_04",
"p_005s_02",
"p_005s_04",
"p_005s_06",
"p_005s_07",
"p_005s_11",
"p_006s_01",
"p_006s_04",
"p_006s_06",
"p_006s_10",
"p_006s_12",
"p_006s_14",
"p_007s_01",
"p_007s_02",
"p_007s_04",
"p_007s_06",
"p_007s_07",
"p_007s_09",
"p_008.5s_02",
"p_009s_01",
"p_009s_09",
"p_009s_12",
"p_009s_13",
"p_009s_14",
"p_009s_15",
"p_010s_02",
"p_010s_06",
"p_010s_11",
"p_011s_06",
"p_011s_08",
"p_011s_10",
"p_011s_11",
"p_012s_01",
"p_012s_06",
"p_012s_08",
"p_012s_12",
"p_012s_14",
"p_013s_02",
"p_013s_08",
"p_013s_12",
"p_013s_13",
"p_013s_14",
"p_014s_01",
"p_014s_02",
"p_014s_03",
"p_014s_11",
"p_014s_13",
"p_014s_15",
"p_015s_09",
"p_015s_13",
"p_015s_14",
"p_016s_01",
"p_016s_02",
"p_016s_03",
"p_016s_04",
"p_016s_06",
"p_016s_07",
"p_016s_14",
"p_016s_15",
"p_017s_01",
"p_017s_03",
"p_017s_05",
"p_017s_06",
"p_017s_07",
"p_018s_02",
"p_018s_06",
"p_018s_09",
"p_018s_14",
"p_019s_01",
"p_019s_03",
"p_019s_12",
"p_020s_01",
"p_020s_05",
"p_020s_11",
"p_020s_13",
"p_021s_01",
"p_021s_05",
"p_021s_08",
"p_021s_09",
"p_021s_12",
"p_021s_14",
"p_023s_03",
"p_024s_01",
"p_024s_06",
"p_024s_11",
"p_024s_12",
"p_025s_13",
"p_026s_04",
"p_026s_05",
"p_026s_12",
"p_026s_14",
"p_027s_01",
"p_027s_03",
"p_027s_05",
"p_027s_06",
"p_027s_12",
"p_030s_01",
"p_030s_02",
"p_030s_04",
"p_030s_05",
"p_030s_06",
"p_030s_07",
"p_030s_11",
"p_032s_05",
"p_1027s_02",
"p_1027s_04",
"p_1027s_08",
"p_1027s_09",
"p_1027s_10",
"p_1027s_12",
"p_1027s_18",
"p_1027s_20",
]
print "custom ordering"
print tables[2].ix[myorder]
print tables[2]
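If you have a phylogeny for these samples you could build myorder programmatically instead of typing it out. A minimal sketch using toytree, assuming a newick tree file named erisor.tre (a hypothetical file name):
## get tip names from a tree so the barplot order matches the tip order
import toytree
tre = toytree.tree("erisor.tre")   ## hypothetical tree file
myorder = tre.get_tip_labels()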
The get_clumpp_table() function parses the results into the table above for you, and it can reorder the individuals based on their membership in each group, or based on an input list of ordered names. To make the barplots interactive we also define a small hover() helper below that builds a label for each cell of the table, which you can see by hovering over the plots.
def hover(table):
    ## build a hover label (name, group, proportion) for every cell
    hover = []
    for row in range(table.shape[0]):
        stack = []
        for col in range(table.shape[1]):
            label = "Name: {}\nGroup: {}\nProp: {}"\
                    .format(table.index[row],
                            table.columns[col],
                            table.ix[row, col])
            stack.append(label)
        hover.append(stack)
    return list(hover)
Hover over the plot to see sample names and info in the hover box.
for kpop in tests:
    ## parse outfile to a table and re-order it
    table = tables[kpop]
    table = table.ix[myorder]

    ## plot barplot w/ hover
    canvas, axes, mark = toyplot.bars(
        table,
        title=hover(table),
        width=2000,
        height=400,
        yshow=False,
        style={"stroke": toyplot.color.near_black},
    )
## save plots for your favorite value of K
table = struct.get_clumpp_table(kpop=2)
table = table.ix[myorder]
## further styling of plot with css
style = {"stroke":toyplot.color.near_black,
"stroke-width": 2}
## build barplot
canvas = toyplot.Canvas(width=1200, height=500)
axes = canvas.cartesian(bounds=("5%", "95%", "5%", "45%"))
axes.bars(table, title=hover(table), style=style)
## add names to x-axis
ticklabels = [i for i in table.index.tolist()]
axes.x.ticks.labels.angle = 0
axes.x.ticks.show = True
axes.x.ticks.labels.offset = 10
axes.x.ticks.labels.style = {"font-size": "22px"}
axes.x.ticks.labels.style = {"font-family": "arial"}
axes.x.spine.style = style
axes.y.show = False
## options: uncomment to save plots. Only html retains hover.
import toyplot.svg
import toyplot.pdf
import toyplot.html
#toyplot.svg.render(canvas, "struct.svg")
toyplot.pdf.render(canvas, "erisor.pdf")
#toyplot.html.render(canvas, "struct.html")
## show in notebook
canvas
I haven't gotten around to writing code to summarize the best value of K yet (contributors are welcome!). For now, I like using the site http://taylor0.biology.ucla.edu/structureHarvester/. It's great. Super easy. Zip up all of the result files in your structure directory, submit them to the site, and you're done.
%%bash -s "$workdir"
## creates a zip archive of all result files (ending with _f) in the workdir
cd $1
zip structure-files.zip *_f
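If you would rather compute the Evanno ΔK statistic yourself, the rough sketch below parses the 'Estimated Ln Prob of Data' line from each replicate's _f result file (the glob pattern for the file names is an assumption; check your workdir for the actual naming scheme used):
import glob
import numpy as np

def get_lnliks(name, workdir, kpop):
    ## collect the mean log-likelihood from every replicate at this K;
    ## the file naming pattern below is an assumption -- adjust to match yours
    pattern = "{}/{}-K-{}*_f".format(workdir, name, kpop)
    liks = []
    for fname in glob.glob(pattern):
        with open(fname) as infile:
            for line in infile:
                if "Estimated Ln Prob of Data" in line:
                    liks.append(float(line.split("=")[-1]))
    return np.array(liks)

## mean L(K) for each tested K (the first ingredient of Evanno's dK)
meanL = {k: get_lnliks("structure-test", workdir, k).mean() for k in tests}
print meanL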
You can easily copy this notebook and then just replace my file names with your file names to run your own analysis. Just click on the [Download Notebook] link at the top of this page, then run jupyter-notebook
from a terminal and open this notebook from the dashboard.