SURPI
v1.0.7
April 2014

Note: For the most up-to-date version of the SURPI source code, go to this website: http://chiulab.ucsf.edu/surpi.

SURPI has been tested on Ubuntu 12.04. It will likely function properly on other Linux distributions, but this has not been tested.

Hardware Requirements

SURPI requires a machine with a large amount of RAM in order to run efficiently. This is mainly due to SNAP, which gains its speed by loading the reference databases completely into RAM. We've run SURPI successfully on machines with 60.5 GB of RAM. By default, SURPI will use all cores on a machine, though the number of cores used can be adjusted in the config file. Much of SURPI is parallelized, so it benefits from using as many cores as possible.
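As a quick pre-flight check, the following sketch reports cores and RAM using standard Linux tools. The 60 GB threshold is only the figure from our test machines, not a hard requirement:

```shell
# Report core count and total RAM before launching SURPI.
cores=$(nproc)
ram_gb=$(awk '/MemTotal/ {printf "%d", $2 / 1024 / 1024}' /proc/meminfo)
echo "Cores: ${cores}  RAM: ${ram_gb} GB"
if [ "$ram_gb" -lt 60 ]; then
    echo "Warning: SNAP may not be able to hold its indexes in RAM."
fi
```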


The steps to install SURPI on a machine are as follows:

The content and creation of the SNAP databases are documented in the Reference Databases section of the paper, which is duplicated below:

A 3.1 gigabase (Gb) human nucleotide database (human DB) was constructed from a combination of human genomic DNA (GRCh37/hg19), rRNA (RefSeq), mRNA (RefSeq), and mitochondrial RNA (RefSeq) sequences in NCBI as of March 2012. The bacterial nucleotide, viral nucleotide, and viral protein databases used by SURPI in fast mode (bacterial DB, viral nucleotide DB, and viral protein DB, respectively) were also constructed from sequences in NCBI as of March 2012. The 3 Gb bacterial DB was constructed from all bacterial RefSeq entries and consisted of 348,922 unique accessioned sequences, each with a minimum length of 100 bp. The 1.4 Gb viral nucleotide DB included 1,193,607 entries and was constructed by searching for all viral sequences in the 42 Gb National Center for Biotechnology Information (NCBI) nt collection using the query term "viridae[Organism]" in BioPython. The viral protein DB was similarly constructed by extracting viral sequences from the NCBI nr collection. Index tables for SNAP (v0.15) were generated with an empirically determined default seed size of 20 for the human DB and viral nucleotide DB, and a seed size of 16 for the bacterial DB. Index tables for RAPSearch (v2.09) were generated from the viral protein DB using default parameters.

To generate the NCBI nucleotide (nt) collection (NCBI nt DB) used by SURPI in comprehensive mode, the complete 42 Gb nt collection was downloaded from NCBI in January 2013. This collection is a comprehensive archive of sequences from multiple sources, including GenBank, the European Molecular Biology Laboratory (EMBL), the DNA Data Bank of Japan (DDBJ), and the Protein Data Bank (PDB), and is the richest collection of annotated microbial sequence data publicly available. Because SNAP uses 32-bit offsets into the reference genome during hashing, the aligner restricts the size of the reference to an absolute maximum of 2^32 bases, or ~4.3 Gb. The 42 Gb NCBI nt collection was therefore first split into 29 sub-databases, each approximately 1.5 Gb in size. Each sub-database was then indexed separately by SNAP at default parameters with a seed size of 20. This generated 29 SNAP-indexed databases, each approximately 27 GB in size; the aggregate of all 29 databases is referred to as the NCBI nt DB.
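The size cap and the per-sub-database indexing step above can be sketched as follows. This is a sketch only: the `snap index` invocation follows SNAP v0.15's documented syntax, and the `nt.part-*.fasta` file names are hypothetical placeholders for the 29 split sub-databases:

```shell
# SNAP's 32-bit offsets cap any single reference at 2^32 bases (~4.3 Gb),
# which is why the 42 Gb nt collection must be split before indexing.
max_bases=$(( 1 << 32 ))
echo "Per-index cap: ${max_bases} bases"

# Index each ~1.5 Gb sub-database separately with seed size 20.
for part in nt.part-*.fasta; do
    [ -e "$part" ] || continue          # skip if no sub-databases are present
    snap index "$part" "${part%.fasta}.snap_index" -s 20
done
```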


SNAP Databases
Comprehensive Mode

• NCBI nt DB

Fast Mode

• Viral nt DB
• Bacterial DB

Taxonomy Databases

These databases can be created using the shell script create_taxonomy_db.sh. Depending on your internet connection speed and the speed of your system, this script may take several hours to complete.

• gi_taxid_prot.db
• gi_taxid_nucl.db
• names_nodes_scientific.db

To use this script, execute the create_taxonomy_db.sh program. This script will download the necessary data from NCBI and generate the three databases listed above.
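In practice this amounts to the following; the verification loop afterwards just confirms that the three databases listed above now exist and are non-empty:

```shell
# Build the taxonomy databases; this downloads NCBI dumps and can take hours.
./create_taxonomy_db.sh

# Verify that all three databases were produced.
for db in gi_taxid_prot.db gi_taxid_nucl.db names_nodes_scientific.db; do
    if [ -s "$db" ]; then echo "OK: $db"; else echo "MISSING: $db"; fi
done
```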


4. SURPI file customization

• cutadapt_quality.csh - specify location of /tmp folder

cutadapt_quality.csh defaults to using /tmp for temporary file storage. If your system has limited space in this location, change it to a directory with more storage available.
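One way to make the change is sketched below. This is illustrative only: /data/scratch is a hypothetical replacement directory, and the sed expression assumes the path appears literally as /tmp in the script:

```shell
script=cutadapt_quality.csh
if [ -f "$script" ]; then
    grep -n '/tmp' "$script"                      # show where temp files are written
    sed -i 's|/tmp|/data/scratch|g' "$script"     # point them at a larger volume
fi
```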

• taxonomy_lookup_embedded.pl

Set database_directory to the location of the taxonomy databases created above.

• tweet.pl

SURPI can send out notifications via Twitter at various stages within the pipeline. If this feature is desired, you will need to set up a Twitter application within your account for this purpose. See https://dev.twitter.com/apps for more details.

Once an application has been set up, fill in the parameters below in the tweet.pl program.
consumer_key

SURPI.sh -z <INPUTFILE>

After typing the above line, a config file and a "go" file will be created. The config file contains default values for many parameters; these may need to be modified depending on your environment. The config file also includes descriptions of the options allowed by SURPI.

2. Once the config file has been customized, the SURPI pipeline can be initiated by typing the name of the go file that was created. Below is an example (boldfaced text is entered by the user):

sfederman@tribble:/data/inputfile/test$ ls -laF
total 750212
drwxrwxr-x 2 sfederman sfederman 4096 Jan 20 16:45 ./
drwxrwxr-x 11 sfederman sfederman 61440 Jan 20 16:45 ../
-rw-rw-r-- 1 sfederman sfederman 768143660 Jan 20 16:45 inputfile.fastq

sfederman@tribble:/data/inputfile/test$ SURPI.sh -z inputfile.fastq

inputfile.config generated. Please edit it to contain the proper parameters for your analysis.
drwxrwxr-x 11 sfederman sfederman     61440 Jan 20 16:45 ../
-rw-rw-r-- 1 sfederman sfederman 1976 Jan 20 16:47 inputfile.config
-rw-rw-r-- 1 sfederman sfederman 768143660 Jan 20 16:45 inputfile.fastq
-rwxrwxr-x 1 sfederman sfederman 84 Jan 20 16:47 go_inputfile*

sfederman@tribble:/data/inputfile/test$ ./go_inputfile &

Progression of the pipeline can be followed by monitoring the log file (titled inputfile.SURPI.log in the above example). We have also found it useful to monitor the status of the pipeline with the program htop.
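For example, using standard tools (inputfile.SURPI.log is the log name from the transcript above):

```shell
log=inputfile.SURPI.log
# Print the most recent pipeline messages; use `tail -f "$log"` to follow the
# log live, and run `htop` in another terminal to watch per-core CPU and memory.
if [ -f "$log" ]; then
    tail -n 20 "$log"
fi
```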