Last updated: 2020-06-17

Checks: 7 passed, 0 failed

Knit directory: analysis_pipelines/

This reproducible R Markdown analysis was created with workflowr (version 1.6.2). The Checks tab describes the reproducibility checks that were applied when the results were created. The Past versions tab lists the development history.


Great! Since the R Markdown file has been committed to the Git repository, you know the exact version of the code that produced these results.

Great job! The global environment was empty. Objects defined in the global environment can affect the analysis in your R Markdown file in unknown ways. For reproducibility it’s best to always run the code in an empty environment.

The command set.seed(20200524) was run prior to running the code in the R Markdown file. Setting a seed ensures that any results that rely on randomness, e.g. subsampling or permutations, are reproducible.

Great job! Recording the operating system, R version, and package versions is critical for reproducibility.

Nice! There were no cached chunks for this analysis, so you can be confident that you successfully produced the results during this run.

Great job! Using relative paths to the files within your workflowr project makes it easier to run your code on other machines.

Great! You are using Git for version control. Tracking code development and connecting the code version to the results is critical for reproducibility.

The results in this page were generated with repository version fe7a9a7. See the Past versions tab to see a history of the changes made to the R Markdown and HTML files.

Note that you need to be careful to ensure that all relevant files for the analysis have been committed to Git prior to generating the results (you can use wflow_publish or wflow_git_commit). workflowr only checks the R Markdown file, but you know if there are other scripts or data files that it depends on. Below is the status of the Git repository when the results were generated:


Ignored files:
    Ignored:    .Rhistory
    Ignored:    .Rproj.user/

Unstaged changes:
    Modified:   analysis/sldsc_pipeline.Rmd

Note that any generated files, e.g. HTML, png, CSS, etc., are not included in this status report because it is ok for generated content to have uncommitted changes.


These are the previous versions of the repository in which changes were made to the R Markdown (analysis/test_sldsc_example.Rmd) and HTML (docs/test_sldsc_example.html) files. If you’ve configured a remote Git repository (see ?wflow_git_remote), click on the hyperlinks in the table below to view the files as they were in that past version.

File Version Author Date Message
Rmd fe7a9a7 kevinlkx 2020-06-17 wflow_publish(“analysis/test_sldsc_example.Rmd”)
html 23551ee kevinlkx 2020-06-17 Build site.
Rmd b1239f4 kevinlkx 2020-06-17 wflow_publish(“analysis/test_sldsc_example.Rmd”)
html 340a726 kevinlkx 2020-06-17 Build site.
Rmd d96c329 kevinlkx 2020-06-17 wflow_publish(“analysis/test_sldsc_example.Rmd”)

Test the example described in the ldsc wiki page "Partitioned-Heritability"

Install LD Score Regression (LDSC) software

Please see the detailed instructions for LD Score Regression (LDSC): https://github.com/bulik/ldsc

Download the ldsc software

git clone https://github.com/bulik/ldsc.git
cd ldsc

Create a conda environment with LDSC’s dependencies

You might need to update numpy (and possibly other packages) in environment.yml to a newer version (e.g., set numpy==1.16 or newer); a sketch of doing this is shown after the command below.

conda env create --file environment.yml
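
A minimal sketch of bumping the numpy pin before creating the environment, assuming environment.yml contains a numpy line (check the exact wording of that line in your copy first):

# Show the current numpy entry in environment.yml
grep -n numpy environment.yml

# Rewrite whatever numpy version spec is present to 1.16
# (adjust the pattern if the line looks different in your copy)
sed -i 's/numpy.*/numpy==1.16/' environment.yml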

Activate the conda environment with LDSC’s dependencies

conda activate ldsc

Test installation

If these commands fail with an error, then something has gone wrong during the installation process.

cd ldsc

python ./ldsc.py -h
python ./munge_sumstats.py -h

Download example data

Download the baseline model LD scores

wget https://data.broadinstitute.org/alkesgroup/LDSCORE/1000G_Phase1_baseline_ldscores.tgz
tar -xvzf 1000G_Phase1_baseline_ldscores.tgz

Download regression weights

wget https://data.broadinstitute.org/alkesgroup/LDSCORE/weights_hm3_no_hla.tgz
tar -xvzf weights_hm3_no_hla.tgz

Download allele frequencies (European samples, 1000 Genomes Phase 1)

wget https://data.broadinstitute.org/alkesgroup/LDSCORE/1000G_Phase1_frq.tgz
tar -xvzf 1000G_Phase1_frq.tgz

Download a list of HapMap3 SNPs

wget https://data.broadinstitute.org/alkesgroup/LDSCORE/w_hm3.snplist.bz2
bzip2 -d w_hm3.snplist.bz2

Download GIANT BMI GWAS summary statistics

wget http://portals.broadinstitute.org/collaboration/giant/images/b/b7/GIANT_BMI_Speliotes2010_publicrelease_HapMapCeuFreq.txt.gz
gunzip GIANT_BMI_Speliotes2010_publicrelease_HapMapCeuFreq.txt.gz

Partition heritability

Prepare GWAS summary stats in LDSC .sumstats format

  • Convert the GWAS summary stats to the LDSC .sumstats format using munge_sumstats.py
  • See the ldsc wiki page "Summary-Statistics-File-Format" for details on the expected format (a quick check of the output is sketched after the command below)
  • Note: you may need to add the option --chunksize 500000 to the munge_sumstats.py command
python munge_sumstats.py \
--sumstats GIANT_BMI_Speliotes2010_publicrelease_HapMapCeuFreq.txt \
--merge-alleles w_hm3.snplist \
--out BMI \
--a1-inc \
--chunksize 500000
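
As a quick sanity check (a sketch, assuming the command above finished without errors), you can peek at the munged output and its log; the .sumstats file should contain the SNP identifier, allele, sample size, and signed Z-score columns:

# First few lines of the munged summary stats (column order may vary)
zcat BMI.sumstats.gz | head

# The log written by munge_sumstats.py summarizes the merging and QC steps
tail BMI.log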

Run S-LDSC on BMI GWAS summary statistics using baseline annotations

  • Joint model: you can include multiple annotation file prefixes (comma-separated in --ref-ld-chr) to fit several annotations jointly; see the sketch after the command below
python ldsc.py \
--h2 BMI.sumstats.gz \
--ref-ld-chr baseline/baseline. \
--w-ld-chr weights_hm3_no_hla/weights. \
--overlap-annot \
--frqfile-chr 1000G_frq/1000G.mac5eur. \
--out BMI_baseline
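
For a joint model, --ref-ld-chr takes a comma-separated list of annotation file prefixes. Below is a sketch with a hypothetical second annotation; my_annot/my_annot. is a placeholder for per-chromosome LD score files you have computed for your own annotation:

python ldsc.py \
--h2 BMI.sumstats.gz \
--ref-ld-chr baseline/baseline.,my_annot/my_annot. \
--w-ld-chr weights_hm3_no_hla/weights. \
--overlap-annot \
--frqfile-chr 1000G_frq/1000G.mac5eur. \
--out BMI_baseline_my_annot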

sessionInfo()
R version 3.5.1 (2018-07-02)
Platform: x86_64-pc-linux-gnu (64-bit)
Running under: Scientific Linux 7.4 (Nitrogen)

Matrix products: default
BLAS/LAPACK: /software/openblas-0.2.19-el7-x86_64/lib/libopenblas_haswellp-r0.2.19.so

locale:
 [1] LC_CTYPE=en_US.UTF-8       LC_NUMERIC=C              
 [3] LC_TIME=en_US.UTF-8        LC_COLLATE=en_US.UTF-8    
 [5] LC_MONETARY=en_US.UTF-8    LC_MESSAGES=en_US.UTF-8   
 [7] LC_PAPER=en_US.UTF-8       LC_NAME=C                 
 [9] LC_ADDRESS=C               LC_TELEPHONE=C            
[11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C       

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] workflowr_1.6.2

loaded via a namespace (and not attached):
 [1] Rcpp_1.0.4.6    rprojroot_1.3-2 digest_0.6.25   later_1.0.0    
 [5] R6_2.4.1        backports_1.1.7 git2r_0.27.1    magrittr_1.5   
 [9] evaluate_0.14   stringi_1.4.6   rlang_0.4.6     fs_1.3.1       
[13] promises_1.1.0  whisker_0.4     rmarkdown_2.1   tools_3.5.1    
[17] stringr_1.4.0   glue_1.4.1      httpuv_1.5.3.1  xfun_0.14      
[21] yaml_2.2.0      compiler_3.5.1  htmltools_0.4.0 knitr_1.28