What it does
The Gumbel method can be used to determine which genes are essential in a single condition. It performs a gene-by-gene analysis of the insertions at TA sites within each gene, makes a call based on the longest consecutive sequence of TA sites without insertion in the gene, and calculates the probability of this using a Bayesian model.
Note: Intended only for Himar1 datasets.
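To make this concrete, here is a minimal sketch of the core statistic, the longest run of consecutive TA sites without insertion in a gene. This is an illustrative helper, not TRANSIT's actual implementation; the min_read argument mirrors the -m flag described under Parameters.

```python
def max_run_of_non_insertions(counts, min_read=1):
    """Longest run of consecutive TA sites with no insertion.

    counts: read counts at the TA sites of one gene, in genomic order.
    min_read: smallest read count treated as a true insertion
              (mirrors the -m flag described under Parameters).
    """
    best = run = 0
    for c in counts:
        run = run + 1 if c < min_read else 0
        best = max(best, run)
    return best

# 10 TA sites with insertions at the 2nd and 8th: the longest gap is 5 sites.
print(max_run_of_non_insertions([0, 12, 0, 0, 0, 0, 0, 30, 0, 0]))  # -> 5
```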
Inputs
Input files for Gumbel need to be:
- .wig files: Tabulated files containing one column with the TA site coordinate and one column with the read count at that site (see the parsing sketch below).
- annotation .prot_table: Annotation file generated by the "Convert Gff3 to prot_table for TRANSIT" tool.
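For illustration, a small reader for this format might look like the following. This is a sketch assuming whitespace-separated coordinate/count lines and textual header lines; read_wig is a hypothetical helper, not part of TRANSIT.

```python
def read_wig(path):
    """Parse a TRANSIT-style .wig file into (TA coordinate, read count) pairs.

    Assumes whitespace-separated columns; comment lines ('#...') and text
    headers (e.g. 'variableStep ...') are skipped.
    """
    sites = []
    with open(path) as fh:
        for line in fh:
            fields = line.split()
            if len(fields) < 2 or not fields[0].isdigit():
                continue  # not a coordinate/count data line
            sites.append((int(fields[0]), int(float(fields[1]))))
    return sites
```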
Parameters
- Optional Arguments:
Argument | Description
-s <integer> | Number of samples. Default: -s 10000
-b <integer> | Number of burn-in samples. Default: -b 500
-m <integer> | Smallest read count to consider. Default: -m 1
-t <integer> | Trims all but every t-th value. Default: -t 1
-r <string> | How to handle replicates: Sum or Mean. Default: -r Mean
--iN <float> | Ignore TAs occurring within the given fraction of the N terminus. Default: --iN 0.0
--iC <float> | Ignore TAs occurring within the given fraction of the C terminus. Default: --iC 0.0
-n <string> | Normalization method to use. Default: -n TTR
- Samples: Gumbel uses Metropolis-Hastings (MH) to generate samples from the posterior distributions. The default setting is to run the simulation for 10,000 iterations. This is usually enough to assure convergence of the sampler and to provide accurate estimates of posterior probabilities. Fewer iterations may work, but at the risk of lower accuracy.
- Burn-In: Because the MH sampler may not have stabilized in the first few iterations, a "burn-in" period is defined. Samples obtained in this "burn-in" period are discarded and do not count towards estimates.
- Trim: The MH sampler produces Markov samples that are correlated. This parameter dictates how many samples must be attempted for every sample obtained. Increasing this parameter will decrease the autocorrelation, at the cost of dramatically increasing the run-time. For most situations, this parameter should be left at the default of 1. (The interplay of these three parameters is illustrated in the sketch after this list.)
- Minimum Read: The minimum read count that is considered a true read. Because the Gumbel method depends on determining gaps of TA sites lacking insertions, it may be susceptible to spurious reads (e.g. errors). The default value of 1 will consider all reads as true reads. A value of 2, for example, will ignore read counts of 1.
- Replicates: Determines how to deal with replicates, by averaging or summing read counts across datasets. This should not have an effect on the Gumbel method, aside from potentially affecting spurious reads.
- Normalisation:
- TTR (Default): Trimmed Total Reads (TTR), normalized by the total read counts (like totreads), but trims the top and bottom 5% of read counts. This is the recommended normalization method for most cases, as it has the benefit of normalizing for differences in saturation in the context of resampling. (A simplified sketch of the trimming idea follows this list.)
- nzmean: Normalizes datasets to have the same mean over the non-zero sites.
- totreads: Normalizes datasets by total read counts, and scales them to have the same mean over all counts.
- zinfnb: Fits a zero-inflated negative binomial model, and then divides read counts by the mean. The zero-inflated negative binomial model will treat some empty sites as belonging to the "true" negative binomial distribution responsible for read counts while treating the others as "essential" (and thus not influencing its parameters).
- quantile: Normalizes datasets using the quantile normalization method described by Bolstad et al. (2003). In this normalization procedure, datasets are sorted, an empirical distribution is estimated as the mean across the sorted datasets at each site, and then the original (unsorted) datasets are assigned values from the empirical distribution based on their quantiles.
- betageom: Normalizes the datasets to fit an "ideal" Geometric distribution with a variable probability parameter p. Especially useful for datasets that contain a large skew. See Beta-Geometric Correction.
- nonorm: No normalization is performed.
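To make the Samples, Burn-In, and Trim parameters concrete, here is a generic Metropolis-Hastings loop showing how burn-in draws are discarded and how trimming (thinning) keeps only every t-th draw. This is an illustrative sketch of the general technique, not TRANSIT's actual sampler.

```python
import math
import random

def mh_samples(log_post, x0, n_samples=10000, burn_in=500, trim=1, step=0.5):
    """Generic Metropolis-Hastings sampler.

    log_post:  function returning the log posterior density at x.
    n_samples: draws kept after burn-in (the -s flag).
    burn_in:   initial draws discarded (the -b flag).
    trim:      keep one draw per `trim` attempts to reduce autocorrelation
               (the -t flag).
    """
    x, kept = x0, []
    total = (burn_in + n_samples) * trim
    for i in range(total):
        prop = x + random.gauss(0.0, step)          # symmetric proposal
        if math.log(random.random()) < log_post(prop) - log_post(x):
            x = prop                                 # accept
        if i % trim == 0 and i // trim >= burn_in:
            kept.append(x)                           # past burn-in, thinned
    return kept

# Toy example: sample a standard normal "posterior".
draws = mh_samples(lambda x: -0.5 * x * x, x0=0.0, n_samples=2000, burn_in=100)
print(sum(draws) / len(draws))  # close to 0.0
```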
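Similarly, a simplified sketch of the TTR idea follows. TRANSIT's exact implementation may differ; the target mean of 100 is an arbitrary choice for illustration.

```python
def ttr_factor(counts, trim_frac=0.05, target=100.0):
    """Scale factor for one dataset under a simplified TTR scheme.

    Sorts the non-zero read counts, trims the top and bottom 5%, and
    returns a factor that brings the trimmed mean to a common target.
    """
    nz = sorted(c for c in counts if c > 0)
    k = int(len(nz) * trim_frac)
    trimmed = nz[k:len(nz) - k] or nz  # fall back if trimming empties the list
    return target / (sum(trimmed) / len(trimmed))

def normalize_ttr(datasets):
    """Rescale each dataset so the trimmed means agree across datasets."""
    normalized = []
    for ds in datasets:
        f = ttr_factor(ds)
        normalized.append([c * f for c in ds])
    return normalized
```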
Outputs
Column Header | Column Definition
Orf | Gene ID
Name | Gene Name
Desc | Gene Description
k | Number of Transposon Insertions Observed within the ORF.
n | Total Number of TA dinucleotides within the ORF.
r | Length of the Maximum Run of Non-Insertions (in TA sites).
s | Span of nucleotides for the Maximum Run of Non-Insertions.
zbar | Posterior Probability of Essentiality.
State Call | Essentiality call for the gene. Depends on FDR-corrected thresholds: E=Essential, U=Uncertain, NE=Non-Essential, S=too short.
Note: Technically, Bayesian models are used to calculate posterior probabilities, not p-values (a concept associated with the frequentist framework). However, we have implemented a method for computing an approximate false-discovery rate (FDR) that serves a similar purpose. This determines a threshold for significance on the posterior probabilities that is corrected for multiple tests. The actual thresholds used are reported in the headers of the output file (and are near 1 for essentials and near 0 for non-essentials). There can be many genes that score between the two thresholds (t1 < zbar < t2). This reflects intrinsic uncertainty associated with low read counts, sparse insertion density, or small genes. If the insertion density is too low (< ~30%), the method may not work as well, and might call an unusually large number of Uncertain or Essential genes.
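For intuition, one common way to calibrate such a threshold from posterior probabilities is the "direct posterior probability" approach sketched below. This is illustrative only and not necessarily TRANSIT's exact procedure.

```python
def essential_threshold(zbars, fdr=0.05):
    """Smallest zbar cutoff t1 such that calling genes with zbar >= t1
    essential keeps the expected false-discovery rate below `fdr`.

    The expected FDR of a called set is the mean of (1 - zbar) over it.
    """
    zs = sorted(zbars, reverse=True)
    t1, err = 1.0, 0.0
    for i, z in enumerate(zs, start=1):
        err += 1.0 - z                 # expected false discoveries so far
        if err / i > fdr:
            break
        t1 = z
    return t1

zbars = [0.999, 0.99, 0.97, 0.9, 0.6, 0.2, 0.05]
print(essential_threshold(zbars))      # -> 0.9, cutoff for the Essential call
```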
More Information
See TRANSIT documentation