# HG changeset patch
# User fubar
# Date 1409207808 14400
# Node ID 2202872ebbe8ed4dbbb982e297f24e1aac073215
# Parent 117a5ada6a6a4a4167e6748b86cf938f526686e6
Uploaded
diff -r 117a5ada6a6a -r 2202872ebbe8 README.txt
--- a/README.txt Thu Aug 28 02:34:24 2014 -0400
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,266 +0,0 @@
-# WARNING before you start
-# Install this tool on a private Galaxy ONLY
-# Please NEVER on a public or production instance
-# updated august 8 2014 to fix bugs reported by Marius van den Beek
-Please cite:
-http://bioinformatics.oxfordjournals.org/cgi/reprint/bts573?ijkey=lczQh1sWrMwdYWJ&keytype=ref
-if you use this tool in your published work.
-
-*Short Story*
-
-This is an unusual Galaxy tool that exposes unrestricted and therefore extremely dangerous
-scripting to designated administrative users of a Galaxy server, allowing them to run scripts
-in R, python, sh and perl over a single input data set, writing a single new data set as output.
-
-In addition, this tool optionally generates very simple new Galaxy tools that effectively
-freeze the supplied script into a new, ordinary Galaxy tool that runs it over a single input file,
-working just like any other Galaxy tool for your users.
-
-If you use the Html output option, please ensure that sanitize_all_html is set to False and
-uncommented in universe_wsgi.ini - it should show:
-
-# By default, all tool output served as 'text/html' will be sanitized
-sanitize_all_html = False
-
-*More Detail*
-
-To use the ToolFactory, you should have prepared a script to paste into a text box,
-and a small test input example ready to select from your history to test your new script.
-There is an example in each scripting language on the Tool Factory form. You can
-cut and paste these to try it out - just remember to select the matching interpreter. You'll
-also need to create a small test data set using the Galaxy history add new data tool.
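-
-For example (purely illustrative - any small tabular dataset will do), a tiny tab separated test file might look like::
-
-    geneA   1   4   7
-    geneB   2   5   8
-    geneC   3   6   9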
-
-If the script fails somehow, use the "redo" button on the tool output in your history to
-recreate the form complete with broken script. Fix the bug and execute again. Rinse, wash, repeat.
-
-Once the script runs successfully, a new Galaxy tool that runs your script can be generated.
-Select the "generate" option and supply some help text and names. The new tool will be
-generated in the form of a new Galaxy datatype - toolshed.gz - as the name suggests,
-it's an archive ready to upload to a Galaxy ToolShed as a new tool repository.
-
-Once it's in a ToolShed, it can be installed into any local Galaxy server from
-the server administrative interface.
-
-Once the new tool is installed, local users can run it - each time, the script that was supplied
-when it was built will be executed with the input chosen from the user's history. In other words,
-the tools you generate with the ToolFactory run just like any other Galaxy tool,
-but run your script every time.
-
-Tool factory tools are perfect for workflow components. One input, one output, no variables.
-
-*To fully and safely exploit the awesome power* of this tool, Galaxy and the ToolShed,
-you should be a developer installing this tool on a private/personal/scratch local instance where you
-are an admin_user. Then, if you break it, you get to keep all the pieces -
-see https://bitbucket.org/fubar/galaxytoolfactory/wiki/Home
-
-**Installation**
-
-This is a Galaxy tool. You can install it most conveniently using the administrative "Search and browse tool sheds" link.
-Find the Galaxy Main toolshed at https://toolshed.g2.bx.psu.edu/ and search for the toolfactory repository.
-Open it, review the code, and select the option to install it.
-
-(
-If you can't get the tool that way, the xml and py files here need to be copied into a new tools
-subdirectory such as tools/toolfactory. Your tool_conf.xml then needs a new entry pointing to the xml
-file - a <tool file="..."/> line in whichever section you prefer.
-
-If it is not already there (I just added it to datatypes_conf.xml.sample), please also add a
-<datatype> entry for the toolshed.gz archive format to your local datatypes_conf.xml.
-)
-
-Of course, R, python, perl etc are needed on your path if you want to test scripts using those interpreters.
-Adding new ones to this tool code should be easy enough. Please make suggestions as bitbucket issues and code.
-The HTML file code automatically shrinks R's bloated pdfs, and depends on ghostscript. The thumbnails require imagemagick.
-
-* Restricted execution *
-The new tool factory tool will then be usable ONLY by admin users - people with IDs listed in admin_users in universe_wsgi.ini.
-**Yes, that's right. ONLY admin_users can run this tool.** Think about it for a moment: if any user were allowed to run
-arbitrary scripts on your Galaxy server, the only thing that would impede a miscreant bent on destroying all your
-Galaxy data would probably be a lack of appropriate technical skills.
-
-*What it does* This is a tool factory for simple scripts in python, R and perl currently.
-Functional tests are automatically generated. How cool is that.
-
-LIMITED to simple scripts that read one input from the history.
-Optionally it can write one new history dataset,
-and optionally collect any number of outputs into links on an autogenerated HTML
-index page for the user to navigate - useful if the script writes images and output files. PDF outputs
-are shown as thumbnails, and R's bloated PDFs are shrunk with ghostscript, so both ghostscript and imagemagick need to
-be available.
-
-Generated tools can be edited and enhanced like any Galaxy tool, so start small and build up since
-a generated script gets you a serious leg up to a more complex one.
-
-*What you do* You paste and run your script,
-fix the syntax errors, and eventually it runs.
-You can use the redo button to edit the script before
-rerunning it as you debug - it works pretty well.
-
-Once the script works on some test data, you can
-generate a toolshed compatible gzip file
-containing your script ready to run as an ordinary Galaxy tool in a
-repository on your local toolshed. That means safe and largely automated installation in any
-production Galaxy configured to use your toolshed.
-
-*Generated tool Security* Once you install a generated tool, it's just
-another tool - assuming the script is safe. It runs normally and its users cannot do anything unusually insecure,
-but please, practice safe toolshed.
-Read the fucking code before you install any tool.
-Especially this one - it is really scary.
-
-If you opt for an HTML output, you get all the script outputs arranged
-as a single Html history item - all output files are linked, thumbnails for all the pdfs.
-Ugly but really inexpensive.
-
-Patches and suggestions are welcome as bitbucket issues, please.
-
-copyright ross lazarus (ross stop lazarus at gmail stop com) May 2012
-
-all rights reserved
-Licensed under the LGPL. If you want to improve it, feel free: https://bitbucket.org/fubar/galaxytoolfactory/wiki/Home
-
-Material for our more enthusiastic and voracious readers continues below - we salute you.
-
-**Motivation** Simple transformation, filtering or reporting scripts get written, run and lost every day in most busy labs
-- even ours where Galaxy is in use. This 'dark script matter' is pervasive and generally not reproducible.
-
-**Benefits** For our group, this allows Galaxy to fill that important dark script gap - all those "small" bioinformatics
-tasks. Once a user has a working R (or python or perl) script that does something Galaxy cannot currently do (eg transpose a
-tabular file) and takes parameters the way Galaxy supplies them (see example below), they:
-
-1. Install the tool factory on a personal private instance
-
-2. Upload a small test data set
-
-3. Paste the script into the 'script' text box and iteratively run the insecure tool on test data until it works right -
-there is absolutely no reason to do this anywhere other than on a personal private instance.
-
-4. Once it works right, set the 'Generate toolshed gzip' option and run it again.
-
-5. A toolshed style gzip appears ready to upload and install like any other Toolshed entry.
-
-6. Upload the new tool to the toolshed
-
-7. Ask the local admin to check the new tool to confirm it's not evil and install it in the local production galaxy
-
-**Simple examples on the tool form**
-
-A simple Rscript "filter" showing how the command line parameters can be handled: it takes an input file,
-does something (transposes it in this case) and writes the results to a new tabular file::
-
- # transpose a tabular input file and write as a tabular output file
- ourargs = commandArgs(TRUE)
- inf = ourargs[1]
- outf = ourargs[2]
- inp = read.table(inf,head=F,row.names=NULL,sep='\t')
- outp = t(inp)
- write.table(outp,outf, quote=FALSE, sep="\t",row.names=F,col.names=F)
-
-Calculate a multiple test adjusted p value from a column of p values - for this script to be useful,
-the right column must be specified in the code for the
-input file type(s) chosen when the tool is generated::
-
- # use p.adjust - assumes a HEADER row and column 1 - please fix for any real use
- column = 1 # adjust if necessary for some other kind of input
- fdrmeth = 'BH'
- ourargs = commandArgs(TRUE)
- inf = ourargs[1]
- outf = ourargs[2]
- inp = read.table(inf,head=T,row.names=NULL,sep='\t')
- p = inp[,column]
- q = p.adjust(p,method=fdrmeth)
- newval = paste(fdrmeth,'p-value',sep='_')
- q = data.frame(q)
- names(q) = newval
- outp = cbind(inp,newval=q)
- write.table(outp,outf, quote=FALSE, sep="\t",row.names=F,col.names=T)
-
-
-
-Another Rscript example without any input file - generates a random heatmap pdf - you must make sure the option to create an HTML output file is
-turned on for this to work. The heatmap will be presented as a thumbnail linked to the pdf in the resulting HTML page::
-
- # note this script takes NO input or output because it generates random data
- foo = data.frame(a=runif(100),b=runif(100),c=runif(100),d=runif(100),e=runif(100),f=runif(100))
- bar = as.matrix(foo)
- pdf( "heattest.pdf" )
- heatmap(bar,main='Random Heatmap')
- dev.off()
-
-A Python example that reverses each row of a tabular file. You'll need to remove the leading spaces for this to work if you cut
-and paste it into the script box. Note that you can already do this in Galaxy by setting up the cut columns tool with the
-correct number of columns in reverse order, but this script will work for any number of columns so is completely generic::
-
-    # reverse order of columns in a tabular file
-    import sys
-    inp = sys.argv[1]
-    outp = sys.argv[2]
-    i = open(inp,'r')
-    o = open(outp,'w')
-    for row in i:
-        rs = row.rstrip().split('\t')
-        rs.reverse()
-        o.write('\t'.join(rs))
-        o.write('\n')
-    i.close()
-    o.close()
-
-
-**Galaxy as an IDE for developing API scripts**
-
-If you need to develop Galaxy API scripts and you like to live dangerously, please read on.
-
-**Galaxy as an IDE?**
-
-Amazingly enough, blend-lib API scripts run perfectly well *inside* Galaxy when pasted into a Tool Factory form - no need to generate a new tool. Galaxy + Tool Factory = IDE; I think we need a new t-shirt. Seriously, it is actually quite usable.
-
-**Why bother - what's wrong with Eclipse?**
-
-Nothing. But compared with developing API scripts in the usual way outside Galaxy, you get persistence and other framework benefits plus, at absolutely no extra charge, a ginormous security problem: the history and any outputs contain the API script complete with your key, so if you share them, your key goes too - development servers only, please!
-
-**Workflow**
-
-Fire up the Tool Factory in Galaxy.
-
-Leave the input box empty, set the interpreter to python, then paste and run an API script - eg the working example below (substitute your own URL and key).
-
-It took me a few iterations to develop the example below because I know almost nothing about the API. I started with very simple code from one of the samples; after each run, the (edited) API script is conveniently recreated using the redo button on the history output item, so each successive version of the developing script is persisted - ready to be edited and rerun easily. It is *very* handy to be able to add a line of code to the script, run it, then view the output to (eg) inspect dicts returned by API calls, moving progressively deeper with each iteration.
-
-Give the example below a whirl on a private clone (install the Tool Factory from the main toolshed) and try adding complexity with a few edit/rerun cycles.
-
-Example Tool Factory API script::
-
-    import sys
-    from blend.galaxy import GalaxyInstance
-    ourGal = 'http://x.x.x.x:xxxx'
-    ourKey = 'xxx'
-    gi = GalaxyInstance(ourGal, key=ourKey)
-    libs = gi.libraries.get_libraries()
-    res = []
-    # libs looks like
-    # u'url': u'/galaxy/api/libraries/441d8112651dc2f3', u'id': u'441d8112651dc2f3', u'name':.... u'Demonstration sample RNA data',
-    for lib in libs:
-        res.append('%s:\n' % lib['name'])
-        res.append(str(gi.libraries.show_library(lib['id'],contents=True)))
-    outf=open(sys.argv[2],'w')
-    outf.write('\n'.join(res))
-    outf.close()
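-
-For instance (an illustrative extension, not part of the original example), the same pattern can be used to peek at
-the structure of what the API returns, assuming the same GalaxyInstance gi as above::
-
-    # illustrative only: list the keys present in the first library dict returned by the API
-    libs = gi.libraries.get_libraries()
-    if libs:
-        sys.stdout.write('keys in the first library dict: %s\n' % ','.join(sorted(libs[0].keys())))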
-
-**Attribution**
-Creating re-usable tools from scripts: The Galaxy Tool Factory
-Ross Lazarus; Antony Kaspi; Mark Ziemann; The Galaxy Team
-Bioinformatics 2012; doi: 10.1093/bioinformatics/bts573
-
-http://bioinformatics.oxfordjournals.org/cgi/reprint/bts573?ijkey=lczQh1sWrMwdYWJ&keytype=ref
-
-**Licensing**
-Copyright Ross Lazarus 2010
-ross lazarus at g mail period com
-
-All rights reserved.
-
-Licensed under the LGPL
-
-**Obligatory screenshot**
-
-http://bitbucket.org/fubar/galaxytoolmaker/src/fda8032fe989/images/dynamicScriptTool.png
-
diff -r 117a5ada6a6a -r 2202872ebbe8 images/dynamicScriptTool.png
Binary file images/dynamicScriptTool.png has changed
diff -r 117a5ada6a6a -r 2202872ebbe8 old.xml
--- a/old.xml Thu Aug 28 02:34:24 2014 -0400
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,835 +0,0 @@
-
- 1 or 2 level models for count data
-
- biocbasics
- package_r3
-
-
-
- rgToolFactory.py --script_path "$runme" --interpreter "Rscript" --tool_name "edgeR"
- --output_dir "$html_file.files_path" --output_html "$html_file" --output_tab "$outtab" --make_HTML "yes"
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- nsamp) {
- dm =dm[1:nsamp,]
- }
- newcolnames = substr(colnames(dm),1,20)
- colnames(dm) = newcolnames
- pdf(outpdfname)
- heatmap.2(dm,main=myTitle,ColSideColors=pcols,col=topo.colors(100),dendrogram="col",key=T,density.info='none',
- Rowv=F,scale='row',trace='none',margins=c(8,8),cexRow=0.4,cexCol=0.5)
- dev.off()
-}
-
-hmap = function(cmat,nmeans=4,outpdfname="heatMap.pdf",nsamp=250,TName='Treatment',group=NA,myTitle="Title goes here")
-{
- ## for 2 groups only was
- ## col.map = function(g) {if (g==TName) "#FF0000" else "#0000FF"}
- ## pcols = unlist(lapply(group,col.map))
- gu = unique(group)
- colours = rainbow(length(gu),start=0.3,end=0.6)
- pcols = colours[match(group,gu)]
- nrows = nrow(cmat)
- mtitle = paste(myTitle,'Heatmap: n contigs =',nrows)
- if (nrows > nsamp) {
- cmat = cmat[c(1:nsamp),]
- mtitle = paste('Heatmap: Top ',nsamp,' DE contigs (of ',nrows,')',sep='')
- }
- newcolnames = substr(colnames(cmat),1,20)
- colnames(cmat) = newcolnames
- pdf(outpdfname)
- heatmap(cmat,scale='row',main=mtitle,cexRow=0.3,cexCol=0.4,Rowv=NA,ColSideColors=pcols)
- dev.off()
-}
-
-qqPlot = function(descr='Title',pvector, ...)
-## stolen from https://gist.github.com/703512
-{
- o = -log10(sort(pvector,decreasing=F))
- e = -log10( 1:length(o)/length(o) )
- o[o==-Inf] = reallysmall
- o[o==Inf] = reallybig
- pdfname = paste(gsub(" ","", descr , fixed=TRUE),'pval_qq.pdf',sep='_')
- maint = paste(descr,'QQ Plot')
- pdf(pdfname)
- plot(e,o,pch=19,cex=1, main=maint, ...,
- xlab=expression(Expected~~-log[10](italic(p))),
- ylab=expression(Observed~~-log[10](italic(p))),
- xlim=c(0,max(e)), ylim=c(0,max(o)))
- lines(e,e,col="red")
- grid(col = "lightgray", lty = "dotted")
- dev.off()
-}
-
-smearPlot = function(DGEList,deTags, outSmear, outMain)
- {
- pdf(outSmear)
- plotSmear(DGEList,de.tags=deTags,main=outMain)
- grid(col="blue")
- dev.off()
- }
-
-boxPlot = function(rawrs,cleanrs,maint,myTitle)
-{
- nc = ncol(rawrs)
- for (i in c(1:nc)) {rawrs[(rawrs[,i] < 0),i] = NA}
- fullnames = colnames(rawrs)
- newcolnames = substr(colnames(rawrs),1,20)
- colnames(rawrs) = newcolnames
- newcolnames = substr(colnames(cleanrs),1,20)
- colnames(cleanrs) = newcolnames
- pdfname = paste(gsub(" ","", myTitle , fixed=TRUE),"sampleBoxplot.pdf",sep='_')
- defpar = par(no.readonly=T)
- pdf(pdfname)
- l = layout(matrix(c(1,2),1,2,byrow=T))
- print.noquote('raw contig counts by sample:')
- print.noquote(summary(rawrs))
- print.noquote('normalised contig counts by sample:')
- print.noquote(summary(cleanrs))
- boxplot(rawrs,varwidth=T,notch=T,ylab='log contig count',col="maroon",las=3,cex.axis=0.35,main=paste('Raw:',maint))
- grid(col="blue")
- boxplot(cleanrs,varwidth=T,notch=T,ylab='log contig count',col="maroon",las=3,cex.axis=0.35,main=paste('After ',maint))
- grid(col="blue")
- dev.off()
- pdfname = paste(gsub(" ","", myTitle , fixed=TRUE),"samplehistplot.pdf",sep='_')
- nc = ncol(rawrs)
- print.noquote(paste('Using ncol rawrs=',nc))
- ncroot = round(sqrt(nc))
- if (ncroot*ncroot < nc) { ncroot = ncroot + 1 }
- m = c()
- for (i in c(1:nc)) {
- rhist = hist(rawrs[,i],breaks=100,plot=F)
- m = append(m,max(rhist\$counts))
- }
- ymax = max(m)
- pdf(pdfname)
- par(mfrow=c(ncroot,ncroot))
- for (i in c(1:nc)) {
- hist(rawrs[,i], main=paste("Contig logcount",i), xlab='log raw count', col="maroon",
- breaks=100,sub=fullnames[i],cex=0.8,ylim=c(0,ymax))
- }
- dev.off()
- par(defpar)
-
-}
-
-cumPlot = function(rawrs,cleanrs,maint,myTitle)
-{
- pdfname = paste(gsub(" ","", myTitle , fixed=TRUE),"RowsumCum.pdf",sep='_')
- defpar = par(no.readonly=T)
- pdf(pdfname)
- par(mfrow=c(2,1))
- lrs = log(rawrs,10)
- lim = max(lrs)
- hist(lrs,breaks=100,main=paste('Before:',maint),xlab="Reads (log)",
- ylab="Count",col="maroon",sub=myTitle, xlim=c(0,lim),las=1)
- grid(col="blue")
- lrs = log(cleanrs,10)
- hist(lrs,breaks=100,main=paste('After:',maint),xlab="Reads (log)",
- ylab="Count",col="maroon",sub=myTitle,xlim=c(0,lim),las=1)
- grid(col="blue")
- dev.off()
- par(defpar)
-}
-
-cumPlot1 = function(rawrs,cleanrs,maint,myTitle)
-{
- pdfname = paste(gsub(" ","", myTitle , fixed=TRUE),"RowsumCum.pdf",sep='_')
- pdf(pdfname)
- par(mfrow=c(2,1))
- lastx = max(rawrs)
- rawe = knots(ecdf(rawrs))
- cleane = knots(ecdf(cleanrs))
- cy = 1:length(cleane)/length(cleane)
- ry = 1:length(rawe)/length(rawe)
- plot(rawe,ry,type='l',main=paste('Before',maint),xlab="Log Contig Total Reads",
- ylab="Cumulative proportion",col="maroon",log='x',xlim=c(1,lastx),sub=myTitle)
- grid(col="blue")
- plot(cleane,cy,type='l',main=paste('After',maint),xlab="Log Contig Total Reads",
- ylab="Cumulative proportion",col="maroon",log='x',xlim=c(1,lastx),sub=myTitle)
- grid(col="blue")
- dev.off()
-}
-
-
-
-doGSEA = function(y=NULL,design=NULL,histgmt="",
- bigmt="/data/genomes/gsea/3.1/Abetterchoice_nocgp_c2_c3_c5_symbols_all.gmt",
- ntest=0, myTitle="myTitle", outfname="GSEA.xls", minnin=5, maxnin=2000,fdrthresh=0.05,fdrtype="BH")
-{
- genesets = c()
- if (bigmt > "")
- {
- bigenesets = readLines(bigmt)
- genesets = bigenesets
- }
- if (histgmt > "")
- {
- hgenesets = readLines(histgmt)
- if (bigmt > "") {
- genesets = rbind(genesets,hgenesets)
- } else {
- genesets = hgenesets
- }
- }
- print.noquote(paste("@@@read",length(genesets), 'genesets from',histgmt,bigmt))
- genesets = strsplit(genesets,'\t')
- ##### tabular. genesetid\tURLorwhatever\tgene_1\t..\tgene_n
- outf = outfname
- head=paste(myTitle,'edgeR GSEA')
- write(head,file=outfname,append=F)
- ntest=length(genesets)
- urownames = toupper(rownames(y))
- upcam = c()
- downcam = c()
- for (i in 1:ntest) {
- gs = unlist(genesets[i])
- g = gs[1] #### geneset_id
- u = gs[2]
- if (u > "") { u = paste("",u,"",sep="") }
- glist = gs[3:length(gs)] #### member gene symbols
- glist = toupper(glist)
- inglist = urownames %in% glist
- nin = sum(inglist)
- if ((nin > minnin) && (nin < maxnin)) {
- ### print(paste('@@found',sum(inglist),'genes in glist'))
- camres = camera(y=y,index=inglist,design=design)
- if (! is.null(camres)) {
- rownames(camres) = g
- ##### gene set name
- camres = cbind(GeneSet=g,URL=u,camres)
- if (camres\$Direction == "Up")
- {
- upcam = rbind(upcam,camres) } else {
- downcam = rbind(downcam,camres)
- }
- }
- }
- }
- uscam = upcam[order(upcam\$PValue),]
- unadjp = uscam\$PValue
- uscam\$adjPValue = p.adjust(unadjp,method=fdrtype)
- nup = max(10,sum((uscam\$adjPValue < fdrthresh)))
- dscam = downcam[order(downcam\$PValue),]
- unadjp = dscam\$PValue
- dscam\$adjPValue = p.adjust(unadjp,method=fdrtype)
- ndown = max(10,sum((dscam\$adjPValue < fdrthresh)))
- write.table(uscam,file=paste('upCamera',outfname,sep='_'),quote=F,sep='\t',row.names=F)
- write.table(dscam,file=paste('downCamera',outfname,sep='_'),quote=F,sep='\t',row.names=F)
- print.noquote(paste('@@@@@ Camera up top',nup,'gene sets:'))
- write.table(head(uscam,nup),file="",quote=F,sep='\t',row.names=F)
- print.noquote(paste('@@@@@ Camera down top',ndown,'gene sets:'))
- write.table(head(dscam,ndown),file="",quote=F,sep='\t',row.names=F)
-}
-
-
-
-edgeIt = function (Count_Matrix,group,outputfilename,fdrtype='fdr',priordf=5,
- fdrthresh=0.05,outputdir='.', myTitle='edgeR',libSize=c(),useNDF=F,
- filterquantile=0.2, subjects=c(),mydesign=NULL,
- doDESeq=T,doVoom=T,doCamera=T,org='hg19',
- histgmt="", bigmt="/data/genomes/gsea/3.1/Abetterchoice_nocgp_c2_c3_c5_symbols_all.gmt",
- doCook=F,DESeq_fittype="parametric")
-{
- if (length(unique(group))!=2){
- print("Number of conditions identified in experiment does not equal 2")
- q()
- }
- require(edgeR)
- options(width = 512)
- mt = paste(unlist(strsplit(myTitle,'_')),collapse=" ")
- allN = nrow(Count_Matrix)
- nscut = round(ncol(Count_Matrix)/2)
- colTotmillionreads = colSums(Count_Matrix)/1e6
- rawrs = rowSums(Count_Matrix)
- nonzerod = Count_Matrix[(rawrs > 0),]
- nzN = nrow(nonzerod)
- nzrs = rowSums(nonzerod)
- zN = allN - nzN
- print('**** Quantiles for non-zero row counts:',quote=F)
- print(quantile(nzrs,probs=seq(0,1,0.1)),quote=F)
- if (useNDF == "T")
- {
- gt1rpin3 = rowSums(Count_Matrix/expandAsMatrix(colTotmillionreads,dim(Count_Matrix)) >= 1) >= nscut
- lo = colSums(Count_Matrix[!gt1rpin3,])
- workCM = Count_Matrix[gt1rpin3,]
- cleanrs = rowSums(workCM)
- cleanN = length(cleanrs)
- meth = paste( "After removing",length(lo),"contigs with fewer than ",nscut," sample read counts >= 1 per million, there are",sep="")
- print(paste("Read",allN,"contigs. Removed",zN,"contigs with no reads.",meth,cleanN,"contigs"),quote=F)
- maint = paste('Filter >= 1/million reads in >=',nscut,'samples')
- } else {
- useme = (nzrs > quantile(nzrs,filterquantile))
- workCM = nonzerod[useme,]
- lo = colSums(nonzerod[!useme,])
- cleanrs = rowSums(workCM)
- cleanN = length(cleanrs)
- meth = paste("After filtering at count quantile =",filterquantile,", there are",sep="")
- print(paste('Read',allN,"contigs. Removed",zN,"with no reads.",meth,cleanN,"contigs"),quote=F)
- maint = paste('Filter below',filterquantile,'quantile')
- }
- cumPlot(rawrs=rawrs,cleanrs=cleanrs,maint=maint,myTitle=myTitle)
- allgenes <- rownames(workCM)
- print(paste("*** Total low count contigs per sample = ",paste(lo,collapse=',')),quote=F)
- rsums = rowSums(workCM)
- TName=unique(group)[1]
- CName=unique(group)[2]
- DGEList = DGEList(counts=workCM, group = group)
- DGEList = calcNormFactors(DGEList)
-
- if (is.null(mydesign)) {
- if (length(subjects) == 0)
- {
- mydesign = model.matrix(~group)
- }
- else {
- subjf = factor(subjects)
- mydesign = model.matrix(~subjf+group)
- ### we block on subject so make group last to simplify finding it
- }
- }
- print.noquote(paste('Using samples:',paste(colnames(workCM),collapse=',')))
- print.noquote('Using design matrix:')
- print.noquote(mydesign)
- DGEList = estimateGLMCommonDisp(DGEList,mydesign)
- comdisp = DGEList\$common.dispersion
- DGEList = estimateGLMTrendedDisp(DGEList,mydesign)
- if (priordf > 0) {
- print.noquote(paste("prior.df =",priordf))
- DGEList = estimateGLMTagwiseDisp(DGEList,mydesign,prior.df = priordf)
- } else {
- DGEList = estimateGLMTagwiseDisp(DGEList,mydesign)
- }
- lastcoef=ncol(mydesign)
- print.noquote(paste('*** lastcoef = ',lastcoef))
- estpriorn = getPriorN(DGEList)
- predLFC1 = predFC(DGEList,prior.count=1,design=mydesign,dispersion=DGEList\$tagwise.dispersion,offset=getOffset(DGEList))
- predLFC3 = predFC(DGEList,prior.count=3,design=mydesign,dispersion=DGEList\$tagwise.dispersion,offset=getOffset(DGEList))
- predLFC5 = predFC(DGEList,prior.count=5,design=mydesign,dispersion=DGEList\$tagwise.dispersion,offset=getOffset(DGEList))
- DGLM = glmFit(DGEList,design=mydesign)
- DE = glmLRT(DGLM)
- #### always last one - subject is first if needed
- logCPMnorm = cpm(DGEList,log=T,normalized.lib.sizes=T)
- logCPMraw = cpm(DGEList,log=T,normalized.lib.sizes=F)
- uoutput = cbind(
- Name=as.character(rownames(DGEList\$counts)),
- DE\$table,
- adj.p.value=p.adjust(DE\$table\$PValue, method=fdrtype),
- Dispersion=DGEList\$tagwise.dispersion,totreads=rsums,
- predLFC1=predLFC1[,lastcoef],
- predLFC3=predLFC3[,lastcoef],
- predLFC5=predLFC5[,lastcoef],
- logCPMnorm,
- DGEList\$counts
- )
- soutput = uoutput[order(DE\$table\$PValue),]
- heatlogcpmnorm = logCPMnorm[order(DE\$table\$PValue),]
- goodness = gof(DGLM, pcutoff=fdrthresh)
- noutl = sum(goodness\$outlier)
- if (noutl > 0) {
- print.noquote(paste('***',noutl,'GLM outliers found'))
- print(paste(rownames(DGLM)[(goodness\$outlier)],collapse=','),quote=F)
- } else {
- print('*** No GLM fit outlier genes found')
- }
- z = limma::zscoreGamma(goodness\$gof.statistic, shape=goodness\$df/2, scale=2)
- pdf(paste(mt,"GoodnessofFit.pdf",sep='_'))
- qq = qqnorm(z, panel.first=grid(), main="tagwise dispersion")
- abline(0,1,lwd=3)
- points(qq\$x[goodness\$outlier],qq\$y[goodness\$outlier], pch=16, col="maroon")
- dev.off()
- print(paste("Common Dispersion =",comdisp,"CV = ",sqrt(comdisp),"getPriorN = ",estpriorn),quote=F)
- uniqueg = unique(group)
- sample_colors = match(group,levels(group))
- pdf(paste(mt,"MDSplot.pdf",sep='_'))
- sampleTypes = levels(factor(group))
- print.noquote(sampleTypes)
- plotMDS.DGEList(DGEList,main=paste("MDS Plot for",myTitle),cex=0.5,col=sample_colors,pch=sample_colors)
- legend(x="topleft", legend = sampleTypes,col=c(1:length(sampleTypes)), pch=19)
- grid(col="blue")
- dev.off()
- colnames(logCPMnorm) = paste( colnames(logCPMnorm),'N',sep="_")
- print(paste('Raw sample CPM',paste(colSums(logCPMraw,na.rm=T),collapse=',')))
- try(boxPlot(rawrs=logCPMraw,cleanrs=logCPMnorm,maint='TMM Normalisation',myTitle=myTitle))
- nreads = soutput\$totreads
- print('*** writing output',quote=F)
- write.table(soutput,outputfilename, quote=FALSE, sep="\t",row.names=F)
- rn = row.names(workCM)
- print.noquote('@@ rn')
- print.noquote(head(rn))
- reg = "^chr([0-9]+):([0-9]+)-([0-9]+)"
- genecards=" 0.8)
- {
- print("@@ using ucsc substitution for urls")
- urls = paste0(ucsc,"&position=chr",testreg[,2],":",testreg[,3],"-",testreg[,4],"\'>",rn,"")
- } else {
- print("@@ using genecards substitution for urls")
- urls = paste0(genecards,rn,"\'>",rn,"")
- }
- tt = uoutput
- print.noquote("*** edgeR Top tags\n")
- tt = cbind(tt,ntotreads=nreads,URL=urls)
- tt = tt[order(DE\$table\$PValue),]
- print.noquote(tt[1:50,])
- ### Plot MAplot
- deTags = rownames(uoutput[uoutput\$adj.p.value < fdrthresh,])
- nsig = length(deTags)
- print(paste('***',nsig,'tags significant at adj p=',fdrthresh),quote=F)
- if (nsig > 0) {
- print('*** deTags',quote=F)
- print(head(deTags))
- }
- deColours = ifelse(deTags,'red','black')
- pdf(paste(mt,"BCV_vs_abundance.pdf",sep='_'))
- plotBCV(DGEList, cex=0.3, main="Biological CV vs abundance")
- dev.off()
- dg = DGEList[order(DE\$table\$PValue),]
- outpdfname=paste(mt,"heatmap.pdf",sep='_')
- hmap2(heatlogcpmnorm,nsamp=100,TName=TName,group=group,outpdfname=outpdfname,myTitle=myTitle)
- outSmear = paste(mt,"Smearplot.pdf",sep='_')
- outMain = paste("Smear Plot for ",TName,' Vs ',CName,' (FDR@',fdrthresh,' N = ',nsig,')',sep='')
- smearPlot(DGEList=DGEList,deTags=deTags, outSmear=outSmear, outMain = outMain)
- qqPlot(descr=myTitle,pvector=DE\$table\$PValue)
- if (doDESeq == T)
- {
- ### DESeq2
- require('DESeq2')
- print.noquote(paste('****subjects=',subjects,'length=',length(subjects)))
- if (length(subjects) == 0)
- {
- pdata = data.frame(Name=colnames(workCM),Rx=group,row.names=colnames(workCM))
- deSEQds = DESeqDataSetFromMatrix(countData = workCM, colData = pdata, design = formula(~ Rx))
- } else {
- pdata = data.frame(Name=colnames(workCM),Rx=group,subjects=subjects,row.names=colnames(workCM))
- deSEQds = DESeqDataSetFromMatrix(countData = workCM, colData = pdata, design = formula(~ subjects + Rx))
- }
- deSeqDatsizefac <- estimateSizeFactors(deSEQds)
- deSeqDatdisp <- estimateDispersions(deSeqDatsizefac,fitType=DESeq_fittype)
- resDESeq <- nbinomWaldTest(deSeqDatdisp, pAdjustMethod=fdrtype)
- rDESeq = as.data.frame(results(resDESeq))
- srDESeq = rDESeq[order(rDESeq\$pvalue),]
- write.table(srDESeq,paste(mt,'DESeq2_TopTable.xls',sep='_'), quote=FALSE, sep="\t",row.names=F)
- topresults.DESeq <- rDESeq[which(rDESeq\$padj < fdrthresh), ]
- DESeqcountsindex <- which(allgenes %in% rownames(topresults.DESeq))
- DESeqcounts <- rep(0, length(allgenes))
- DESeqcounts[DESeqcountsindex] <- 1
- pdf(paste(mt,"DESeq2_dispersion_estimates.pdf",sep='_'))
- plotDispEsts(resDESeq)
- dev.off()
- if (doCook) {
- pdf(paste(mt,"DESeq2_cooks_distance.pdf",sep='_'))
- W <- mcols(resDESeq)\$WaldStatistic_condition_treated_vs_untreated
- maxCooks <- mcols(resDESeq)\$maxCooks
- idx <- !is.na(W)
- plot(rank(W[idx]), maxCooks[idx], xlab="rank of Wald statistic", ylab="maximum Cook's distance per gene",
- ylim=c(0,5), cex=.4, col="maroon")
- m <- ncol(dds)
- p <- 3
- abline(h=qf(.75, p, m - p),col="darkblue")
- grid(col="lightgray",lty="dotted")
- }
- }
- counts.dataframe = as.data.frame(c())
- norm.factor = DGEList\$samples\$norm.factors
- topresults.edgeR <- soutput[which(soutput\$adj.p.value < fdrthresh), ]
- edgeRcountsindex <- which(allgenes %in% rownames(topresults.edgeR))
- edgeRcounts <- rep(0, length(allgenes))
- edgeRcounts[edgeRcountsindex] <- 1
- if (doVoom == T) {
- pdf(paste(mt,"voomplot.pdf",sep='_'))
- dat.voomed <- voom(DGEList, mydesign, plot = TRUE, normalize.method="quantile", lib.size = NULL)
- dev.off()
- fit <- lmFit(dat.voomed, mydesign)
- fit <- eBayes(fit)
- rvoom <- topTable(fit, coef = length(colnames(mydesign)), adj = "BH", n = Inf)
- write.table(rvoom,paste(mt,'VOOM_topTable.xls',sep='_'), quote=FALSE, sep="\t",row.names=F)
- topresults.voom <- rvoom[which(rvoom\$adj.P.Val < fdrthresh), ]
- voomcountsindex <- which(allgenes %in% rownames(topresults.voom))
- voomcounts <- rep(0, length(allgenes))
- voomcounts[voomcountsindex] <- 1
- }
- if ((doDESeq==T) || (doVoom==T)) {
- if ((doVoom==T) && (doDESeq==T)) {
- vennmain = paste(mt,'Voom,edgeR and DESeq2 overlap at FDR=',fdrthresh)
- counts.dataframe <- data.frame(edgeR = edgeRcounts, DESeq2 = DESeqcounts,
- VOOM_limma = voomcounts, row.names = allgenes)
- } else if (doDESeq==T) {
- vennmain = paste(mt,'DESeq2 and edgeR overlap at FDR=',fdrthresh)
- counts.dataframe <- data.frame(edgeR = edgeRcounts, DESeq2 = DESeqcounts, row.names = allgenes)
- } else if (doVoom==T) {
- vennmain = paste(mt,'Voom and edgeR overlap at FDR=',fdrthresh)
- counts.dataframe <- data.frame(edgeR = edgeRcounts, VOOM_limma = voomcounts, row.names = allgenes)
- }
-
- if (nrow(counts.dataframe) > 1) {
- counts.venn <- vennCounts(counts.dataframe)
- vennf = paste(mt,'venn.pdf',sep='_')
- pdf(vennf)
- vennDiagram(counts.venn,main=vennmain,col="maroon")
- dev.off()
- }
- } ### doDESeq or doVoom
- if (doDESeq==T) {
- cat("*** DESeq top 50\n")
- print(srDESeq[1:50,])
- }
- if (doVoom==T) {
- cat("*** VOOM top 50\n")
- print(rvoom[1:50,])
- }
- if (doCamera) {
- doGSEA(y=DGEList,design=mydesign,histgmt=histgmt,bigmt=bigmt,ntest=20,myTitle=myTitle,
- outfname=paste(mt,"GSEA.xls",sep="_"),fdrthresh=fdrthresh,fdrtype=fdrtype)
- }
- uoutput
-
-}
-#### Done
-
-#### sink(stdout(),append=T,type="message")
-
-doDESeq = $DESeq.doDESeq
-### make these 'T' or 'F'
-doVoom = $doVoom
-doCamera = $camera.doCamera
-Out_Dir = "$html_file.files_path"
-Input = "$input1"
-TreatmentName = "$treatment_name"
-TreatmentCols = "$Treat_cols"
-ControlName = "$control_name"
-ControlCols= "$Control_cols"
-outputfilename = "$outtab"
-org = "$input1.dbkey"
-if (org == "") { org = "hg19"}
-fdrtype = "$fdrtype"
-priordf = $priordf
-fdrthresh = $fdrthresh
-useNDF = "$useNDF"
-fQ = $fQ
-myTitle = "$title"
-sids = strsplit("$subjectids",',')
-subjects = unlist(sids)
-nsubj = length(subjects)
-builtin_gmt=""
-history_gmt=""
-
-builtin_gmt = ""
-history_gmt = ""
-DESeq_fittype=""
-#if $DESeq.doDESeq == "T"
- DESeq_fittype = "$DESeq.DESeq_fitType"
-#end if
-#if $camera.doCamera == 'T'
- #if $camera.gmtSource.refgmtSource == "indexed" or $camera.gmtSource.refgmtSource == "both":
- builtin_gmt = "${camera.gmtSource.builtinGMT.fields.path}"
- #end if
- #if $camera.gmtSource.refgmtSource == "history" or $camera.gmtSource.refgmtSource == "both":
- history_gmt = "${camera.gmtSource.ownGMT}"
- history_gmt_name = "${camera.gmtSource.ownGMT.name}"
- #end if
-#end if
-if (nsubj > 0) {
-if (doDESeq) {
- print('WARNING - cannot yet use DESeq2 for 2 way anova - see the docs')
- doDESeq = F
- }
-}
-TCols = as.numeric(strsplit(TreatmentCols,",")[[1]])-1
-CCols = as.numeric(strsplit(ControlCols,",")[[1]])-1
-cat('Got TCols=')
-cat(TCols)
-cat('; CCols=')
-cat(CCols)
-cat('\n')
-useCols = c(TCols,CCols)
-if (file.exists(Out_Dir) == F) dir.create(Out_Dir)
-Count_Matrix = read.table(Input,header=T,row.names=1,sep='\t') #Load tab file assume header
-snames = colnames(Count_Matrix)
-nsamples = length(snames)
-if (nsubj > 0 & nsubj != nsamples) {
-options("show.error.messages"=T)
-mess = paste('Fatal error: Supplied subject id list',paste(subjects,collapse=','),
- 'has length',nsubj,'but there are',nsamples,'samples',paste(snames,collapse=','))
-write(mess, stderr())
-quit(save="no",status=4)
-}
-
-Count_Matrix = Count_Matrix[,useCols] ### reorder columns
-if (length(subjects) != 0) {subjects = subjects[useCols]}
-rn = rownames(Count_Matrix)
-islib = rn %in% c('librarySize','NotInBedRegions')
-LibSizes = Count_Matrix[subset(rn,islib),][1] # take first
-Count_Matrix = Count_Matrix[subset(rn,! islib),]
-group = c(rep(TreatmentName,length(TCols)), rep(ControlName,length(CCols)) )
-group = factor(group, levels=c(ControlName,TreatmentName))
-colnames(Count_Matrix) = paste(group,colnames(Count_Matrix),sep="_")
-results = edgeIt(Count_Matrix=Count_Matrix,group=group,outputfilename=outputfilename,
- fdrtype='BH',priordf=priordf,fdrthresh=fdrthresh,outputdir='.',
- myTitle='edgeR',useNDF=F,libSize=c(),filterquantile=fQ,subjects=subjects,
- doDESeq=doDESeq,doVoom=doVoom,doCamera=doCamera,org=org,
- histgmt=history_gmt,bigmt=builtin_gmt,DESeq_fittype=DESeq_fittype)
-sessionInfo()
-]]>
-
-
-
-
-**What it does**
-
-Performs digital gene expression analysis between a treatment and control on a count matrix.
-Optionally adds a term for subject if not all samples are independent or if some other factor needs to be blocked in the design.
-
-**Input**
-
-A matrix consisting of non-negative integers. The matrix must have a unique header row identifying the samples, and a unique set of row names
-as the first column. Typically the row names are gene symbols or probe ids for downstream use in GSEA and other methods.
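-
-For example, a count matrix might be laid out like this (all names and values below are purely illustrative)::
-
-    Contig          Treat1  Treat2  Control1  Control2
-    chr1:100-200        12       7         0         3
-    chr2:500-900        99     120        88        95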
-
-If you have (eg) paired samples and wish to include a term in the GLM to account for some other factor (subject in the case of paired samples),
-supply a comma separated list of integers, one for each sample (whether modelled or not!), indicating (eg) the subject number,
-or an empty string if all samples are independent.
-If not empty, there must be exactly as many integers in the supplied list as there are columns (samples) in the count matrix.
-Integers for samples that are not in the analysis *must* be present in the string as filler even if not used.
-
-If you have 2 pairs out of 6 samples, you need to put in unique integers for the unpaired ones.
-For example, if you had 6 samples with the first two independent and the second and third pairs each coming from a different subject, you might use
-8,9,1,1,2,2
-as subject IDs to indicate two paired samples from the same subject in columns 3/4 and another pair in columns 5/6.
-
-**Output**
-
-A summary html page with links to the R source code and all the outputs, nice grids of helpful plot thumbnails, and lots
-of logging and the top 50 rows of the topTable.
-
-The main topTables of results are provided as separate excelish tabular files.
-
-They include adjusted p values and dispersions for each region, raw and cpm sample data counts and shrunken (predicted) log fold change estimates.
-These are provided for downstream analyses such as GSEA and are predictions of the logFC you might expect to see
-in an independent replication of your original experiment. A higher prior count means more shrinkage. Shrinkage is more extreme for low coverage features,
-so downstream analyses are more robust against strong effect size estimates based on relatively little experimental information.
-
-**Note on prior.N**
-
-http://seqanswers.com/forums/showthread.php?t=5591 says:
-
-*prior.n*
-
-The value for prior.n determines the amount of smoothing of tagwise dispersions towards the common dispersion.
-You can think of it as like a "weight" for the common value. (It is actually the weight for the common likelihood
-in the weighted likelihood equation). The larger the value for prior.n, the more smoothing, i.e. the closer your
-tagwise dispersion estimates will be to the common dispersion. If you use a prior.n of 1, then that gives the
-common likelihood the weight of one observation.
-
-In answer to your question, it is a good thing to squeeze the tagwise dispersions towards a common value,
-or else you will be using very unreliable estimates of the dispersion. I would not recommend using the value that
-you obtained from estimateSmoothing()---this is far too small and would result in virtually no moderation
-(squeezing) of the tagwise dispersions. How many samples do you have in your experiment?
-What is the experimental design? If you have few samples (less than 6) then I would suggest a prior.n of at least 10.
-If you have more samples, then the tagwise dispersion estimates will be more reliable,
-so you could consider using a smaller prior.n, although I would hesitate to use a prior.n less than 5.
-
-
-From Bioconductor Digest, Vol 118, Issue 5, Gordon writes:
-
-Dear Dorota,
-
-The important settings are prior.df and trend.
-
-prior.n and prior.df are related through prior.df = prior.n * residual.df,
-and your experiment has residual.df = 36 - 12 = 24. So the old setting of
-prior.n=10 is equivalent for your data to prior.df = 240, a very large
-value. Going the other way, the new setting of prior.df=10 is equivalent
-to prior.n=10/24.
-
-To recover old results with the current software you would use
-
- estimateTagwiseDisp(object, prior.df=240, trend="none")
-
-To get the new default from old software you would use
-
- estimateTagwiseDisp(object, prior.n=10/24, trend=TRUE)
-
-Actually the old trend method is equivalent to trend="loess" in the new
-software. You should use plotBCV(object) to see whether a trend is
-required.
-
-Note you could also use
-
- prior.n = getPriorN(object, prior.df=10)
-
-to map between prior.df and prior.n.
-
-**Old rant on variable name changes in bioconductor versions**
-
-BioC authors sometimes make small, mostly cosmetic changes to variable names (eg from p.value to PValue),
-often to make them more internally consistent or self describing. Unfortunately, these improvements
-break existing code that relies on the library, in ways that can take a while to track down,
-increasing downstream tool maintenance effort uselessly.
-
-Please, don't do that. It hurts us.
-
-
-
-
-
-
-
diff -r 117a5ada6a6a -r 2202872ebbe8 rgToolFactory.py
--- a/rgToolFactory.py Thu Aug 28 02:34:24 2014 -0400
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,730 +0,0 @@
-# rgToolFactory.py
-# see https://bitbucket.org/fubar/galaxytoolfactory/wiki/Home
-#
-# copyright ross lazarus (ross stop lazarus at gmail stop com) May 2012
-#
-# all rights reserved
-# Licensed under the LGPL
-# suggestions for improvement and bug fixes welcome at https://bitbucket.org/fubar/galaxytoolfactory/wiki/Home
-#
-# march 2014
-# had to remove dependencies because cross toolshed dependencies are not possible - can't pre-specify a toolshed url for graphicsmagick and ghostscript
-# grrrrr - night before a demo
-# added dependencies to a tool_dependencies.xml if html page generated so generated tool is properly portable
-#
-# added ghostscript and graphicsmagick as dependencies
-# fixed a weird problem where gs was trying to use the new_files_path from universe (database/tmp) as ./database/tmp
-# errors ensued
-#
-# august 2013
-# found a problem with GS if $TMP or $TEMP missing - now inject /tmp and warn
-#
-# july 2013
-# added ability to combine images and individual log files into html output
-# just make sure there's a log file foo.log and it will be output
-# together with all images named like "foo_*.pdf"
-# otherwise old format for html
-#
-# January 2013
-# problem pointed out by Carlos Borroto
-# added escaping for <>$ - thought I did that ages ago...
-#
-# August 11 2012
-# changed to use shell=False and cl as a sequence
-
-# This is a Galaxy tool factory for simple scripts in python, R or whatever ails ye.
-# It also serves as the wrapper for the new tool.
-#
-# you paste and run your script
-# Only works for simple scripts that read one input from the history.
-# Optionally can write one new history dataset,
-# and optionally collect any number of outputs into links on an autogenerated HTML page.
-
-# DO NOT install on a public or important site - please.
-
-# installed generated tools are fine if the script is safe.
-# They just run normally and their user cannot do anything unusually insecure
-# but please, practice safe toolshed.
-# Read the fucking code before you install any tool
-# especially this one
-
-# After you get the script working on some test data, you can
-# optionally generate a toolshed compatible gzip file
-# containing your script safely wrapped as an ordinary Galaxy script in your local toolshed for
-# safe and largely automated installation in a production Galaxy.
-
-# If you opt for an HTML output, you get all the script outputs arranged
-# as a single Html history item - all output files are linked, thumbnails for all the pdfs.
-# Ugly but really inexpensive.
-#
-# Patches appreciated please.
-#
-#
-# long route to June 2012 product
-# Behold the awesome power of Galaxy and the toolshed with the tool factory to bind them
-# derived from an integrated script model
-# called rgBaseScriptWrapper.py
-# Note to the unwary:
-# This tool allows arbitrary scripting on your Galaxy as the Galaxy user
-# There is nothing stopping a malicious user doing whatever they choose
-# Extremely dangerous!!
-# Totally insecure. So, trusted users only
-#
-# preferred model is a developer using their throw away workstation instance - ie a private site.
-# no real risk. The universe_wsgi.ini admin_users string is checked - only admin users are permitted to run this tool.
-#
-
-import sys
-import shutil
-import subprocess
-import os
-import time
-import tempfile
-import optparse
-import tarfile
-import re
-import shutil
-import math
-
-progname = os.path.split(sys.argv[0])[1]
-myversion = 'V001.1 March 2014'
-verbose = False
-debug = False
-toolFactoryURL = 'https://bitbucket.org/fubar/galaxytoolfactory'
-
-# if we do html we need these dependencies specified in a tool_dependencies.xml file and referred to in the generated
-# tool xml
-toolhtmldepskel = """
-
-
-
-
-
-
-
-
- %s
-
-
-"""
-
-protorequirements = """
- ghostscript
- graphicsmagick
- """
-
-def timenow():
- """return current time as a string
- """
- return time.strftime('%d/%m/%Y %H:%M:%S', time.localtime(time.time()))
-
-html_escape_table = {
-     "&": "&amp;",
-     ">": "&gt;",
-     "<": "&lt;",
-     "$": "\$"
-     }
-
-def html_escape(text):
- """Produce entities within text."""
- return "".join(html_escape_table.get(c,c) for c in text)
-
-def cmd_exists(cmd):
- return subprocess.call("type " + cmd, shell=True,
- stdout=subprocess.PIPE, stderr=subprocess.PIPE) == 0
-
-
-def parse_citations(citations_text):
- """
- """
- citations = [c for c in citations_text.split("**ENTRY**") if c.strip()]
- citation_tuples = []
- for citation in citations:
- if citation.startswith("doi"):
- citation_tuples.append( ("doi", citation[len("doi"):].strip() ) )
- else:
- citation_tuples.append( ("bibtex", citation[len("bibtex"):].strip() ) )
- return citation_tuples
-
-
-class ScriptRunner:
- """class is a wrapper for an arbitrary script
- """
-
- def __init__(self,opts=None,treatbashSpecial=True):
- """
- cleanup inputs, setup some outputs
-
- """
- self.useGM = cmd_exists('gm')
- self.useIM = cmd_exists('convert')
- self.useGS = cmd_exists('gs')
- self.temp_warned = False # we want only one warning if $TMP not set
- self.treatbashSpecial = treatbashSpecial
- if opts.output_dir: # simplify for the tool tarball
- os.chdir(opts.output_dir)
- self.thumbformat = 'png'
- self.opts = opts
- self.toolname = re.sub('[^a-zA-Z0-9_]+', '', opts.tool_name) # a sanitizer now does this but..
- self.toolid = self.toolname
- self.myname = sys.argv[0] # get our name because we write ourselves out as a tool later
- self.pyfile = self.myname # crude but efficient - the cruft won't hurt much
- self.xmlfile = '%s.xml' % self.toolname
- s = open(self.opts.script_path,'r').readlines()
- s = [x.rstrip() for x in s] # remove pesky dos line endings if needed
- self.script = '\n'.join(s)
- fhandle,self.sfile = tempfile.mkstemp(prefix=self.toolname,suffix=".%s" % (opts.interpreter))
- tscript = open(self.sfile,'w') # use self.sfile as script source for Popen
- tscript.write(self.script)
- tscript.close()
- self.indentedScript = '\n'.join([' %s' % html_escape(x) for x in s]) # for restructured text in help
- self.escapedScript = '\n'.join([html_escape(x) for x in s])
- self.elog = os.path.join(self.opts.output_dir,"%s_error.log" % self.toolname)
- if opts.output_dir: # may not want these complexities
- self.tlog = os.path.join(self.opts.output_dir,"%s_runner.log" % self.toolname)
- art = '%s.%s' % (self.toolname,opts.interpreter)
- artpath = os.path.join(self.opts.output_dir,art) # need full path
- artifact = open(artpath,'w') # use self.sfile as script source for Popen
- artifact.write(self.script)
- artifact.close()
- self.cl = []
- self.html = []
- a = self.cl.append
- a(opts.interpreter)
- if self.treatbashSpecial and opts.interpreter in ['bash','sh']:
- a(self.sfile)
- else:
- a('-') # stdin
- a(opts.input_tab)
- a(opts.output_tab)
- self.outFormats = 'tabular' # TODO make this an option at tool generation time
- self.inputFormats = 'tabular,txt' # TODO make this an option at tool generation time
- self.test1Input = '%s_test1_input.xls' % self.toolname
- self.test1Output = '%s_test1_output.xls' % self.toolname
- self.test1HTML = '%s_test1_output.html' % self.toolname
-
- def makeXML(self):
- """
- Create a Galaxy xml tool wrapper for the new script as a string to write out
- fixme - use templating or something less fugly than this example of what we produce
-
-
- a tabular file
-
- reverse.py --script_path "$runMe" --interpreter "python"
- --tool_name "reverse" --input_tab "$input1" --output_tab "$tab_file"
-
-
-
-
-
-
-
-
-
-
-
-**What it Does**
-
-Reverse the columns in a tabular file
-
-
-
-
-
-# reverse order of columns in a tabular file
-import sys
-inp = sys.argv[1]
-outp = sys.argv[2]
-i = open(inp,'r')
-o = open(outp,'w')
-for row in i:
- rs = row.rstrip().split('\t')
- rs.reverse()
- o.write('\t'.join(rs))
- o.write('\n')
-i.close()
-o.close()
-
-
-
-
-
-
- """
- newXML="""
-%(tooldesc)s
-%(requirements)s
-
-%(command)s
-
-
-%(inputs)s
-
-
-%(outputs)s
-
-
-
-%(script)s
-
-
-
-%(tooltests)s
-
-
-
-%(help)s
-
-
-
- %(citations)s
- 10.1093/bioinformatics/bts573
-
-""" # needs a dict with toolname, toolid, interpreter, scriptname, command, inputs as a multi line string ready to write, outputs ditto, help ditto
-
- newCommand="""
- %(toolname)s.py --script_path "$runMe" --interpreter "%(interpreter)s"
- --tool_name "%(toolname)s" %(command_inputs)s %(command_outputs)s """
- # may NOT be an input or htmlout - appended later
- tooltestsTabOnly = """
-
-
-
-
-
-
-
-
- """
- tooltestsHTMLOnly = """
-
-
-
-
-
-
-
-
- """
- tooltestsBoth = """
-
-
-
-
-
-
-
-
- """
- xdict = {}
- xdict['requirements'] = ''
- if self.opts.make_HTML:
- if self.opts.include_dependencies == "yes":
- xdict['requirements'] = protorequirements
- xdict['tool_version'] = self.opts.tool_version
- xdict['test1Input'] = self.test1Input
- xdict['test1HTML'] = self.test1HTML
- xdict['test1Output'] = self.test1Output
- if self.opts.make_HTML and self.opts.output_tab <> 'None':
- xdict['tooltests'] = tooltestsBoth % xdict
- elif self.opts.make_HTML:
- xdict['tooltests'] = tooltestsHTMLOnly % xdict
- else:
- xdict['tooltests'] = tooltestsTabOnly % xdict
- xdict['script'] = self.escapedScript
- # configfile is least painful way to embed script to avoid external dependencies
- # but requires escaping of <, > and $ to avoid Mako parsing
- if self.opts.help_text:
- helptext = open(self.opts.help_text,'r').readlines()
- helptext = [html_escape(x) for x in helptext] # must html escape here too - thanks to Marius van den Beek
- xdict['help'] = ''.join([x for x in helptext])
- else:
- xdict['help'] = 'Please ask the tool author (%s) for help as none was supplied at tool generation\n' % (self.opts.user_email)
- if self.opts.citations:
- citationstext = open(self.opts.citations,'r').read()
- citation_tuples = parse_citations(citationstext)
- citations_xml = ""
- for citation_type, citation_content in citation_tuples:
- citation_xml = """%s""" % (citation_type, html_escape(citation_content))
- citations_xml += citation_xml
- xdict['citations'] = citations_xml
- else:
- xdict['citations'] = ""
- coda = ['**Script**','Pressing execute will run the following code over your input file and generate some outputs in your history::']
- coda.append('\n')
- coda.append(self.indentedScript)
- coda.append('\n**Attribution**\nThis Galaxy tool was created by %s at %s\nusing the Galaxy Tool Factory.\n' % (self.opts.user_email,timenow()))
- coda.append('See %s for details of that project' % (toolFactoryURL))
- coda.append('Please cite: Creating re-usable tools from scripts: The Galaxy Tool Factory. Ross Lazarus; Antony Kaspi; Mark Ziemann; The Galaxy Team. ')
- coda.append('Bioinformatics 2012; doi: 10.1093/bioinformatics/bts573\n')
- xdict['help'] = '%s\n%s' % (xdict['help'],'\n'.join(coda))
- if self.opts.tool_desc:
- xdict['tooldesc'] = '%s' % self.opts.tool_desc
- else:
- xdict['tooldesc'] = ''
- xdict['command_outputs'] = ''
- xdict['outputs'] = ''
- if self.opts.input_tab <> 'None':
- xdict['command_inputs'] = '--input_tab "$input1" ' # the space may matter a lot if we append something
- xdict['inputs'] = ' \n' % self.inputFormats
- else:
- xdict['command_inputs'] = '' # assume no input - eg a random data generator
- xdict['inputs'] = ''
- xdict['inputs'] += ' \n' % self.toolname
- xdict['toolname'] = self.toolname
- xdict['toolid'] = self.toolid
- xdict['interpreter'] = self.opts.interpreter
- xdict['scriptname'] = self.sfile
- if self.opts.make_HTML:
- xdict['command_outputs'] += ' --output_dir "$html_file.files_path" --output_html "$html_file" --make_HTML "yes"'
- xdict['outputs'] += ' \n'
- else:
- xdict['command_outputs'] += ' --output_dir "./"'
- if self.opts.output_tab <> 'None':
- xdict['command_outputs'] += ' --output_tab "$tab_file"'
- xdict['outputs'] += ' \n' % self.outFormats
- xdict['command'] = newCommand % xdict
- xmls = newXML % xdict
- xf = open(self.xmlfile,'w')
- xf.write(xmls)
- xf.write('\n')
- xf.close()
- # ready for the tarball
-
-
- def makeTooltar(self):
- """
- a tool is a gz tarball with eg
- /toolname/tool.xml /toolname/tool.py /toolname/test-data/test1_in.foo ...
- """
- retval = self.run()
- if retval:
- print >> sys.stderr,'## Run failed. Cannot build yet. Please fix and retry'
- sys.exit(1)
- tdir = self.toolname
- os.mkdir(tdir)
- self.makeXML()
- if self.opts.make_HTML:
- if self.opts.help_text:
- hlp = open(self.opts.help_text,'r').read()
- else:
- hlp = 'Please ask the tool author for help as none was supplied at tool generation\n'
- if self.opts.include_dependencies:
- tooldepcontent = toolhtmldepskel % hlp
- depf = open(os.path.join(tdir,'tool_dependencies.xml'),'w')
- depf.write(tooldepcontent)
- depf.write('\n')
- depf.close()
- if self.opts.input_tab <> 'None': # no reproducible test otherwise? TODO: maybe..
- testdir = os.path.join(tdir,'test-data')
- os.mkdir(testdir) # make tests directory
- shutil.copyfile(self.opts.input_tab,os.path.join(testdir,self.test1Input))
- if self.opts.output_tab <> 'None':
- shutil.copyfile(self.opts.output_tab,os.path.join(testdir,self.test1Output))
- if self.opts.make_HTML:
- shutil.copyfile(self.opts.output_html,os.path.join(testdir,self.test1HTML))
- if self.opts.output_dir:
- shutil.copyfile(self.tlog,os.path.join(testdir,'test1_out.log'))
- outpif = '%s.py' % self.toolname # new name
- outpiname = os.path.join(tdir,outpif) # path for the tool tarball
- pyin = os.path.basename(self.pyfile) # our name - we rewrite ourselves (TM)
- notes = ['# %s - a self annotated version of %s generated by running %s\n' % (outpiname,pyin,pyin),]
- notes.append('# to make a new Galaxy tool called %s\n' % self.toolname)
- notes.append('# User %s at %s\n' % (self.opts.user_email,timenow()))
- pi = open(self.pyfile,'r').readlines() # our code becomes new tool wrapper (!) - first Galaxy worm
- notes += pi
- outpi = open(outpiname,'w')
- outpi.write(''.join(notes))
- outpi.write('\n')
- outpi.close()
- stname = os.path.join(tdir,self.sfile)
- if not os.path.exists(stname):
- shutil.copyfile(self.sfile, stname)
- xtname = os.path.join(tdir,self.xmlfile)
- if not os.path.exists(xtname):
- shutil.copyfile(self.xmlfile,xtname)
- tarpath = "%s.gz" % self.toolname
- tar = tarfile.open(tarpath, "w:gz")
- tar.add(tdir,arcname=self.toolname)
- tar.close()
- shutil.copyfile(tarpath,self.opts.new_tool)
- shutil.rmtree(tdir)
- ## TODO: replace with optional direct upload to local toolshed?
- return retval
-
-
- def compressPDF(self,inpdf=None,thumbformat='png'):
- """need absolute path to pdf
- note that GS gets confoozled if no $TMP or $TEMP
- so we set it
- """
-        assert os.path.isfile(inpdf), "## Input %s supplied to %s compressPDF not found" % (inpdf,self.myname)
- hlog = os.path.join(self.opts.output_dir,"compress_%s.txt" % os.path.basename(inpdf))
- sto = open(hlog,'a')
- our_env = os.environ.copy()
- our_tmp = our_env.get('TMP',None)
- if not our_tmp:
- our_tmp = our_env.get('TEMP',None)
- if not (our_tmp and os.path.exists(our_tmp)):
- newtmp = os.path.join(self.opts.output_dir,'tmp')
- try:
- os.mkdir(newtmp)
- except:
- sto.write('## WARNING - cannot make %s - it may exist or permissions need fixing\n' % newtmp)
- our_env['TEMP'] = newtmp
- if not self.temp_warned:
- sto.write('## WARNING - no $TMP or $TEMP!!! Please fix - using %s temporarily\n' % newtmp)
- self.temp_warned = True
- outpdf = '%s_compressed' % inpdf
- cl = ["gs", "-sDEVICE=pdfwrite", "-dNOPAUSE", "-dUseCIEColor", "-dBATCH","-dPDFSETTINGS=/printer", "-sOutputFile=%s" % outpdf,inpdf]
- x = subprocess.Popen(cl,stdout=sto,stderr=sto,cwd=self.opts.output_dir,env=our_env)
- retval1 = x.wait()
- sto.close()
- if retval1 == 0:
- os.unlink(inpdf)
- shutil.move(outpdf,inpdf)
- os.unlink(hlog)
- hlog = os.path.join(self.opts.output_dir,"thumbnail_%s.txt" % os.path.basename(inpdf))
- sto = open(hlog,'w')
- outpng = '%s.%s' % (os.path.splitext(inpdf)[0],thumbformat)
- if self.useGM:
- cl2 = ['gm', 'convert', inpdf, outpng]
- else: # assume imagemagick
- cl2 = ['convert', inpdf, outpng]
- x = subprocess.Popen(cl2,stdout=sto,stderr=sto,cwd=self.opts.output_dir,env=our_env)
- retval2 = x.wait()
- sto.close()
- if retval2 == 0:
- os.unlink(hlog)
- retval = retval1 or retval2
- return retval
-
-
- def getfSize(self,fpath,outpath):
- """
- format a nice file size string
- """
- size = ''
- fp = os.path.join(outpath,fpath)
- if os.path.isfile(fp):
- size = '0 B'
- n = float(os.path.getsize(fp))
- if n > 2**20:
- size = '%1.1f MB' % (n/2**20)
- elif n > 2**10:
- size = '%1.1f KB' % (n/2**10)
- elif n > 0:
- size = '%d B' % (int(n))
- return size
-
- def makeHtml(self):
- """ Create an HTML file content to list all the artifacts found in the output_dir
- """
-
- galhtmlprefix = """
-
-
\n"""
-
- flist = os.listdir(self.opts.output_dir)
- flist = [x for x in flist if x <> 'Rplots.pdf']
- flist.sort()
- html = []
- html.append(galhtmlprefix % progname)
- html.append('<div class="toolFormTitle">Galaxy Tool "%s" run at %s</div>' % (self.toolname,timenow()))
- fhtml = []
- if len(flist) > 0:
- logfiles = [x for x in flist if x.lower().endswith('.log')] # log file names determine sections
- logfiles.sort()
- logfiles = [x for x in logfiles if os.path.abspath(x) <> os.path.abspath(self.tlog)]
- logfiles.append(os.path.abspath(self.tlog)) # make it the last one
- pdflist = []
- npdf = len([x for x in flist if os.path.splitext(x)[-1].lower() == '.pdf'])
- for rownum,fname in enumerate(flist):
- dname,e = os.path.splitext(fname)
- sfsize = self.getfSize(fname,self.opts.output_dir)
- if e.lower() == '.pdf' : # compress and make a thumbnail
- thumb = '%s.%s' % (dname,self.thumbformat)
- pdff = os.path.join(self.opts.output_dir,fname)
- retval = self.compressPDF(inpdf=pdff,thumbformat=self.thumbformat)
- if retval == 0:
- pdflist.append((fname,thumb))
- else:
- pdflist.append((fname,fname))
- if (rownum+1) % 2 == 0:
- fhtml.append('<tr class="odd_row"><td><a href="%s">%s</a></td><td>%s</td></tr>' % (fname,fname,sfsize))
- else:
- fhtml.append('<tr><td><a href="%s">%s</a></td><td>%s</td></tr>' % (fname,fname,sfsize))
- for logfname in logfiles: # expect at least tlog - if more
- if os.path.abspath(logfname) == os.path.abspath(self.tlog): # handled later
- sectionname = 'All tool run'
- if (len(logfiles) > 1):
- sectionname = 'Other'
- ourpdfs = pdflist
- else:
- realname = os.path.basename(logfname)
- sectionname = os.path.splitext(realname)[0].split('_')[0] # break in case _ added to log
- ourpdfs = [x for x in pdflist if os.path.basename(x[0]).split('_')[0] == sectionname]
- pdflist = [x for x in pdflist if os.path.basename(x[0]).split('_')[0] <> sectionname] # remove
- nacross = 1
- npdf = len(ourpdfs)
-
- if npdf > 0:
- nacross = math.sqrt(npdf) ## int(round(math.log(npdf,2)))
- if int(nacross)**2 != npdf:
- nacross += 1
- nacross = int(nacross)
- width = min(400,int(1200/nacross))
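- # illustrative note (not in the original source): with 10 PDFs, sqrt(10) ~ 3.2 and 3*3 < 10,
- # so nacross becomes 4 and each thumbnail is rendered min(400, 1200/4) = 300 pixels wide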
- html.append('<div class="toolFormTitle">%s images and outputs</div>' % sectionname)
- html.append('(Click on a thumbnail image to download the corresponding original PDF image)<br/>')
- ntogo = nacross # counter for table row padding with empty cells
- html.append('<div><table cellpadding="2" cellspacing="2">\n<tr>')
- for i,paths in enumerate(ourpdfs):
-     fname,thumb = paths
-     s = """<td><a href="%s"><img src="%s" title="Click to download a PDF of %s" width="%d" alt="Image called %s"/></a></td>\n""" % (fname,thumb,fname,width,fname)
-     if ((i+1) % nacross == 0): # this thumbnail ends a row
-         s += '</tr>\n'
-         ntogo = 0
-         if i < (npdf - 1): # more to come so open the next row
-             s += '<tr>'
-             ntogo = nacross
-     else:
-         ntogo -= 1
-     html.append(s)
- if html[-1].strip().endswith('</tr>'): # last row was closed inside the loop
-     html.append('</table></div>\n')
- else:
-     if ntogo > 0: # pad the incomplete last row with empty cells
-         html.append('<td>&nbsp;</td>'*ntogo)
-     html.append('</tr></table></div>\n')
- logt = open(logfname,'r').readlines()
- logtext = [x for x in logt if x.strip() > '']
- html.append('<div class="toolFormTitle">%s log output</div>' % sectionname)
- if len(logtext) > 1:
- html.append('\n<pre>\n')
- html += logtext
- html.append('\n</pre>\n')
- else:
- html.append('%s is empty<br/>' % logfname)
- if len(fhtml) > 0:
- fhtml.insert(0,'<table><tr><th>Output File Name (click to view)</th><th>Size</th></tr>\n')
- fhtml.append('</table>')
- html.append('<div class="toolFormTitle">All output files available for downloading</div>\n')
- html += fhtml # add all non-pdf files to the end of the display
- else:
- html.append('<h2>### Error - %s returned no files - please confirm that parameters are sane</h2>' % self.opts.interpreter)
- html.append(galhtmlpostfix)
- htmlf = file(self.opts.output_html,'w')
- htmlf.write('\n'.join(html))
- htmlf.write('\n')
- htmlf.close()
- self.html = html
-
-
- def run(self):
- """
- scripts must be small enough not to fill the pipe!
- """
- if self.treatbashSpecial and self.opts.interpreter in ['bash','sh']:
- retval = self.runBash()
- else:
- if self.opts.output_dir:
- ste = open(self.elog,'w')
- sto = open(self.tlog,'w')
- sto.write('## Toolfactory generated command line = %s\n' % ' '.join(self.cl))
- sto.flush()
- p = subprocess.Popen(self.cl,shell=False,stdout=sto,stderr=ste,stdin=subprocess.PIPE,cwd=self.opts.output_dir)
- else:
- p = subprocess.Popen(self.cl,shell=False,stdin=subprocess.PIPE)
- p.stdin.write(self.script)
- p.stdin.close()
- retval = p.wait()
- if self.opts.output_dir:
- sto.close()
- ste.close()
- err = open(self.elog,'r').readlines()
- if retval <> 0 and err: # problem
- print >> sys.stderr,err
- if self.opts.make_HTML:
- self.makeHtml()
- return retval
-
- def runBash(self):
- """
- cannot use - for bash so use self.sfile
- """
- if self.opts.output_dir:
- s = '## Toolfactory generated command line = %s\n' % ' '.join(self.cl)
- sto = open(self.tlog,'w')
- sto.write(s)
- sto.flush()
- p = subprocess.Popen(self.cl,shell=False,stdout=sto,stderr=sto,cwd=self.opts.output_dir)
- else:
- p = subprocess.Popen(self.cl,shell=False)
- retval = p.wait()
- if self.opts.output_dir:
- sto.close()
- if self.opts.make_HTML:
- self.makeHtml()
- return retval
-
-
-def main():
- u = """
- This is a Galaxy wrapper. It expects to be called by a special purpose tool.xml as:
- rgToolFactory.py --script_path "$scriptPath" --tool_name "foo" --interpreter "Rscript"
-
- """
- op = optparse.OptionParser()
- a = op.add_option
- a('--script_path',default=None)
- a('--tool_name',default=None)
- a('--interpreter',default=None)
- a('--output_dir',default='./')
- a('--output_html',default=None)
- a('--input_tab',default="None")
- a('--output_tab',default="None")
- a('--user_email',default='Unknown')
- a('--bad_user',default=None)
- a('--make_Tool',default=None)
- a('--make_HTML',default=None)
- a('--help_text',default=None)
- a('--citations',default=None)
- a('--tool_desc',default=None)
- a('--new_tool',default=None)
- a('--tool_version',default=None)
- a('--include_dependencies',default=None)
- opts, args = op.parse_args()
- assert not opts.bad_user,'UNAUTHORISED: %s is NOT authorized to use this tool until Galaxy admin adds %s to admin_users in universe_wsgi.ini' % (opts.bad_user,opts.bad_user)
- assert opts.tool_name,'## Tool Factory expects a tool name - eg --tool_name=DESeq'
- assert opts.interpreter,'## Tool Factory wrapper expects an interpreter - eg --interpreter=Rscript'
- assert os.path.isfile(opts.script_path),'## Tool Factory wrapper expects a script path - eg --script_path=foo.R'
- if opts.output_dir:
- try:
- os.makedirs(opts.output_dir)
- except:
- pass
- r = ScriptRunner(opts)
- if opts.make_Tool:
- retcode = r.makeTooltar()
- else:
- retcode = r.run()
- os.unlink(r.sfile)
- if retcode:
- sys.exit(retcode) # indicate failure to job runner
-
-
-if __name__ == "__main__":
- main()
-
-
diff -r 117a5ada6a6a -r 2202872ebbe8 rgToolFactory.xml
--- a/rgToolFactory.xml Thu Aug 28 02:34:24 2014 -0400
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,365 +0,0 @@
-
- Run a script; make a tool!
-
- ghostscript
- graphicsmagick
-
-
-#if ( $__user_email__ not in $__admin_users__ ):
- rgToolFactory.py --bad_user $__user_email__
-#else:
- rgToolFactory.py --script_path "$runme" --interpreter "$interpreter"
- --tool_name "$tool_name" --user_email "$__user_email__"
- #if $make_TAB.value=="yes":
- --output_tab "$tab_file"
- #end if
- #if $makeMode.make_Tool=="yes":
- --make_Tool "$makeMode.make_Tool"
- --tool_desc "$makeMode.tool_desc"
- --tool_version "$makeMode.tool_version"
- --new_tool "$new_tool"
- --help_text "$helpme"
- #if $make_HTML.value=="yes":
- #if $makeMode.include_deps.value=="yes":
- --include_dependencies "yes"
- #end if
- #end if
- --citations "$citeme"
- #end if
- #if $make_HTML.value=="yes":
- --output_dir "$html_file.files_path" --output_html "$html_file" --make_HTML "yes"
- #else:
- --output_dir "."
- #end if
- #if $input1 != 'None':
- --input_tab "$input1"
- #end if
-#end if
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- make_TAB=="yes"
-
-
-
-
-
-
-
-
-
-
-
- make_HTML == "yes"
-
-
- makeMode['make_Tool'] == "yes"
-
-
-
-$dynScript
-
-#if $makeMode.make_Tool == "yes":
-${makeMode.help_text}
-#end if
-
-
-#if $makeMode.make_Tool == "yes":
-#for $citation in $makeMode.citations:
-#if $citation.citation_type.type == "bibtex":
-**ENTRY**bibtex
-${citation.citation_type.bibtex}
-#else
-**ENTRY**doi
-${citation.citation_type.doi}
-#end if
-#end for
-#end if
-
-
-
-
-.. class:: warningmark
-
-**Details and attribution** GTF_
-
-**Local Admins ONLY** Only users whose IDs are found in the local admin_users configuration setting in universe_wsgi.ini can run this tool.
-
-**If you find a bug** please raise an issue at the bitbucket repository GTFI_
-
-**What it does** This tool enables a user to paste and submit an arbitrary R/python/perl script to Galaxy.
-
-**Input options** This version is limited to simple transformation or reporting requiring only a single input file selected from the history.
-
-**Output options** Optional script outputs include a single new history tabular file; alternatively, for scripts that create multiple outputs,
-a new HTML report linking all the files and images created by the script can be generated automatically.
-
-**Tool Generation option** Once the script is working with test data, this tool will optionally generate a new Galaxy tool in a gzip file
-ready to upload to your local toolshed for sharing and installation. Provide a small sample input when you generate the tool because
-it will become the input for the generated functional test.
-
-.. class:: warningmark
-
-**Note to system administrators** This tool offers *NO* built-in protection against malicious scripts. It should only be installed on private/personal Galaxy instances.
-Admin_users will have the power to do anything they want as the Galaxy user if you install this tool.
-
-.. class:: warningmark
-
-**Use on public servers** is STRONGLY discouraged for obvious reasons
-
-The tools generated by this tool will run just as securely as any other normally installed Galaxy tool but, like any other new tool, should always be checked carefully before installation.
-We recommend that you follow the good code hygiene practices associated with a safe toolshed.
-
-**Scripting conventions** The pasted script will be executed with the path to the (optional) input tabular data file (or "None" if you do not select one) as the first command line parameter,
-and the path to the optional output file (or "None" if none is wanted) as the second. The script must deal appropriately with these - see the Rscript examples below.
-Note that if an optional HTML output is selected, all the output files created by the script will be nicely presented as links, with pdf images linked as thumbnails in that output.
-This can be handy for complex scripts creating lots of output.
-
-**Examples**
- $OUTF
-
-A trivial perl script example to show that even perl works::
-
- #
- # change all occurrences of a string in a file to another string
- #
- $oldfile = $ARGV[0];
- $newfile = $ARGV[1];
- $old = "gene";
- $new = "foo";
- open(OF, $oldfile);
- open(NF, ">$newfile");
- # read in each line of the file
- while ($line = <OF>) {
- $line =~ s/$old/$new/;
- print NF $line;
- }
- close(OF);
- close(NF);
-
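-For comparison, a minimal python sketch (illustrative only, not shipped with the original tool) using the same convention - input path first, then output path::
-
- import sys
- inp = sys.argv[1]    # path to the selected input file, or "None" if none was chosen
- outp = sys.argv[2]   # path for the new history output, or "None" if none was requested
- if inp != "None" and outp != "None":
-     with open(inp) as i, open(outp, "w") as o:
-         for line in i:
-             o.write(line.upper())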
-]]>
-
-**Citation**
-
-
-Paper_ :
-
-Creating re-usable tools from scripts: The Galaxy Tool Factory
-Ross Lazarus; Antony Kaspi; Mark Ziemann; The Galaxy Team
-Bioinformatics 2012; doi: 10.1093/bioinformatics/bts573
-
-
-**Licensing**
-
-Copyright Ross Lazarus (ross period lazarus at gmail period com) May 2012
-All rights reserved.
-Licensed under the LGPL_
-
-.. _LGPL: http://www.gnu.org/copyleft/lesser.html
-.. _GTF: https://bitbucket.org/fubar/galaxytoolfactory
-.. _GTFI: https://bitbucket.org/fubar/galaxytoolfactory/issues
-.. _Paper: http://bioinformatics.oxfordjournals.org/cgi/reprint/bts573?ijkey=lczQh1sWrMwdYWJ&keytype=ref
-
-
-
-
- 10.1093/bioinformatics/bts573
-
-
-
-
diff -r 117a5ada6a6a -r 2202872ebbe8 rgToolFactoryMultIn.py
--- a/rgToolFactoryMultIn.py Thu Aug 28 02:34:24 2014 -0400
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,736 +0,0 @@
-# rgToolFactoryMultIn.py
-# see https://bitbucket.org/fubar/galaxytoolfactory/wiki/Home
-#
-# copyright ross lazarus (ross stop lazarus at gmail stop com) May 2012
-#
-# all rights reserved
-# Licensed under the LGPL
-# suggestions for improvement and bug fixes welcome at https://bitbucket.org/fubar/galaxytoolfactory/wiki/Home
-#
-# august 2014
-# Allows arbitrary number of input files
-# NOTE positional parameters are now passed to script
-# and output (may be "None") is *before* arbitrary number of inputs
-#
-# march 2014
-# had to remove dependencies because cross toolshed dependencies are not possible - can't pre-specify a toolshed url for graphicsmagick and ghostscript
-# grrrrr - night before a demo
-# added dependencies to a tool_dependencies.xml if html page generated so generated tool is properly portable
-#
-# added ghostscript and graphicsmagick as dependencies
-# fixed a weird problem where gs was trying to use the new_files_path from universe (database/tmp) as ./database/tmp
-# errors ensued
-#
-# august 2013
-# found a problem with GS if $TMP or $TEMP missing - now inject /tmp and warn
-#
-# july 2013
-# added ability to combine images and individual log files into html output
-# just make sure there's a log file foo.log and it will be output
-# together with all images named like "foo_*.pdf"
-# otherwise old format for html
-#
-# January 2013
-# problem pointed out by Carlos Borroto
-# added escaping for <>$ - thought I did that ages ago...
-#
-# August 11 2012
-# changed to use shell=False and cl as a sequence
-
-# This is a Galaxy tool factory for simple scripts in python, R or whatever ails ye.
-# It also serves as the wrapper for the new tool.
-#
-# you paste and run your script
-# Only works for simple scripts that read one input from the history.
-# Optionally can write one new history dataset,
-# and optionally collect any number of outputs into links on an autogenerated HTML page.
-
-# DO NOT install on a public or important site - please.
-
-# installed generated tools are fine if the script is safe.
-# They just run normally and their user cannot do anything unusually insecure
-# but please, practice safe toolshed.
-# Read the fucking code before you install any tool
-# especially this one
-
-# After you get the script working on some test data, you can
-# optionally generate a toolshed compatible gzip file
-# containing your script safely wrapped as an ordinary Galaxy script in your local toolshed for
-# safe and largely automated installation in a production Galaxy.
-
-# If you opt for an HTML output, you get all the script outputs arranged
-# as a single Html history item - all output files are linked, thumbnails for all the pdfs.
-# Ugly but really inexpensive.
-#
-# Patches appreciated please.
-#
-#
-# long route to June 2012 product
-# Behold the awesome power of Galaxy and the toolshed with the tool factory to bind them
-# derived from an integrated script model
-# called rgBaseScriptWrapper.py
-# Note to the unwary:
-# This tool allows arbitrary scripting on your Galaxy as the Galaxy user
-# There is nothing stopping a malicious user doing whatever they choose
-# Extremely dangerous!!
-# Totally insecure. So, trusted users only
-#
-# preferred model is a developer using their throwaway workstation instance - i.e. a private site.
-# no real risk. The universe_wsgi.ini admin_users string is checked - only admin users are permitted to run this tool.
-#
-
-import sys
-import shutil
-import subprocess
-import os
-import time
-import tempfile
-import optparse
-import tarfile
-import re
-import math
-
-progname = os.path.split(sys.argv[0])[1]
-myversion = 'V001.1 March 2014'
-verbose = False
-debug = False
-toolFactoryURL = 'https://bitbucket.org/fubar/galaxytoolfactory'
-
-# if we do html we need these dependencies specified in a tool_dependencies.xml file and referred to in the generated
-# tool xml
-toolhtmldepskel = """
-
-
-
-
-
-
-
-
- %s
-
-
-"""
-
-protorequirements = """
- ghostscript
- graphicsmagick
- """
-
-def timenow():
- """return current time as a string
- """
- return time.strftime('%d/%m/%Y %H:%M:%S', time.localtime(time.time()))
-
-html_escape_table = {
- "&": "&",
- ">": ">",
- "<": "<",
- "$": "\$"
- }
-
-def html_escape(text):
- """Produce entities within text."""
- return "".join(html_escape_table.get(c,c) for c in text)
-
-def cmd_exists(cmd):
- return subprocess.call("type " + cmd, shell=True,
- stdout=subprocess.PIPE, stderr=subprocess.PIPE) == 0
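-# e.g. cmd_exists('gm') is True only when a GraphicsMagick "gm" binary is on the PATH (illustrative note, not in the original)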
-
-
-class ScriptRunner:
- """class is a wrapper for an arbitrary script
- """
-
- def __init__(self,opts=None,treatbashSpecial=True):
- """
- cleanup inputs, setup some outputs
-
- """
- self.useGM = cmd_exists('gm')
- self.useIM = cmd_exists('convert')
- self.useGS = cmd_exists('gs')
- self.temp_warned = False # we want only one warning if $TMP not set
- self.treatbashSpecial = treatbashSpecial
- if opts.output_dir: # simplify for the tool tarball
- os.chdir(opts.output_dir)
- self.thumbformat = 'png'
- self.opts = opts
- self.toolname = re.sub('[^a-zA-Z0-9_]+', '', opts.tool_name) # a sanitizer now does this but..
- self.toolid = self.toolname
- self.myname = sys.argv[0] # get our name because we write ourselves out as a tool later
- self.pyfile = self.myname # crude but efficient - the cruft won't hurt much
- self.xmlfile = '%s.xml' % self.toolname
- s = open(self.opts.script_path,'r').readlines()
- s = [x.rstrip() for x in s] # remove pesky dos line endings if needed
- self.script = '\n'.join(s)
- fhandle,self.sfile = tempfile.mkstemp(prefix=self.toolname,suffix=".%s" % (opts.interpreter))
- tscript = open(self.sfile,'w') # use self.sfile as script source for Popen
- tscript.write(self.script)
- tscript.close()
- self.indentedScript = '\n'.join([' %s' % html_escape(x) for x in s]) # for restructured text in help
- self.escapedScript = '\n'.join([html_escape(x) for x in s])
- self.elog = os.path.join(self.opts.output_dir,"%s_error.log" % self.toolname)
- if opts.output_dir: # may not want these complexities
- self.tlog = os.path.join(self.opts.output_dir,"%s_runner.log" % self.toolname)
- art = '%s.%s' % (self.toolname,opts.interpreter)
- artpath = os.path.join(self.opts.output_dir,art) # need full path
- artifact = open(artpath,'w') # use self.sfile as script source for Popen
- artifact.write(self.script)
- artifact.close()
- self.cl = []
- self.html = []
- self.test1Inputs = [] # now a list
- a = self.cl.append
- a(opts.interpreter)
- if self.treatbashSpecial and opts.interpreter in ['bash','sh']:
- a(self.sfile)
- else:
- a('-') # stdin
- # if multiple inputs - positional or need to distinguish them with cl params
- if opts.output_tab:
- a('%s' % opts.output_tab)
- if opts.input_tab:
- tests = []
- for i,intab in enumerate(opts.input_tab): # if multiple, make tests
- if intab.find(',') <> -1:
- (gpath,uname) = intab.split(',')
- else:
- gpath = uname = intab
- a('"%s"' % (intab))
- tests.append(os.path.basename(gpath))
- self.test1Inputs = '<param name="input1" value="%s" />' % (','.join(tests))
- else:
- self.test1Inputs = ''
- self.outFormats = opts.output_format
- self.inputFormats = opts.input_formats
- self.test1Output = '%s_test1_output.xls' % self.toolname
- self.test1HTML = '%s_test1_output.html' % self.toolname
-
- def makeXML(self):
- """
- Create a Galaxy xml tool wrapper for the new script as a string to write out
- fixme - use templating or something less fugly than this example of what we produce
-
-
- a tabular file
-
- reverse.py --script_path "$runMe" --interpreter "python"
- --tool_name "reverse" --input_tab "$input1" --output_tab "$tab_file"
-
-
-
-
-
-
-
-
-
-
-
-**What it Does**
-
-Reverse the columns in a tabular file
-
-
-
-
-
-# reverse order of columns in a tabular file
-import sys
-inp = sys.argv[1]
-outp = sys.argv[2]
-i = open(inp,'r')
-o = open(outp,'w')
-for row in i:
- rs = row.rstrip().split('\t')
- rs.reverse()
- o.write('\t'.join(rs))
- o.write('\n')
-i.close()
-o.close()
-
-
-
-
-
-
- """
- newXML="""
-%(tooldesc)s
-%(requirements)s
-
-%(command)s
-
-
-%(inputs)s
-
-
-%(outputs)s
-
-
-
-%(script)s
-
-
-
-%(tooltests)s
-
-
-
-%(help)s
-
-
-""" # needs a dict with toolname, toolid, interpreter, scriptname, command, inputs as a multi line string ready to write, outputs ditto, help ditto
-
- newCommand="""
- %(toolname)s.py --script_path "$runMe" --interpreter "%(interpreter)s"
- --tool_name "%(toolname)s"
- %(command_inputs)s
- %(command_outputs)s
- """
- # may NOT be an input or htmlout - appended later
- tooltestsTabOnly = """
-
- %(test1Inputs)s
-
-
-
-
-
- """
- tooltestsHTMLOnly = """
-
- %(test1Inputs)s
-
-
-
-
-
- """
- tooltestsBoth = """
-
- %(test1Inputs)s
-
-
-
-
-
-
- """
- xdict = {}
- xdict['requirements'] = ''
- if self.opts.make_HTML:
- if self.opts.include_dependencies == "yes":
- xdict['requirements'] = protorequirements
- xdict['tool_version'] = self.opts.tool_version
- xdict['test1HTML'] = self.test1HTML
- xdict['test1Output'] = self.test1Output
- xdict['test1Inputs'] = self.test1Inputs
- if self.opts.make_HTML and self.opts.output_tab <> 'None':
- xdict['tooltests'] = tooltestsBoth % xdict
- elif self.opts.make_HTML:
- xdict['tooltests'] = tooltestsHTMLOnly % xdict
- else:
- xdict['tooltests'] = tooltestsTabOnly % xdict
- xdict['script'] = self.escapedScript
- # configfile is least painful way to embed script to avoid external dependencies
- # but requires escaping of <, > and $ to avoid Mako parsing
- if self.opts.help_text:
- helptext = open(self.opts.help_text,'r').readlines()
- helptext = [html_escape(x) for x in helptext] # must html escape here too - thanks to Marius van den Beek
- xdict['help'] = ''.join([x for x in helptext])
- else:
- xdict['help'] = 'Please ask the tool author (%s) for help as none was supplied at tool generation\n' % (self.opts.user_email)
- coda = ['**Script**','Pressing execute will run the following code over your input file and generate some outputs in your history::']
- coda.append('\n')
- coda.append(self.indentedScript)
- coda.append('\n**Attribution**\nThis Galaxy tool was created by %s at %s\nusing the Galaxy Tool Factory.\n' % (self.opts.user_email,timenow()))
- coda.append('See %s for details of that project' % (toolFactoryURL))
- coda.append('Please cite: Creating re-usable tools from scripts: The Galaxy Tool Factory. Ross Lazarus; Antony Kaspi; Mark Ziemann; The Galaxy Team. ')
- coda.append('Bioinformatics 2012; doi: 10.1093/bioinformatics/bts573\n')
- xdict['help'] = '%s\n%s' % (xdict['help'],'\n'.join(coda))
- if self.opts.tool_desc:
- xdict['tooldesc'] = '<description>%s</description>' % self.opts.tool_desc
- else:
- xdict['tooldesc'] = ''
- xdict['command_outputs'] = ''
- xdict['outputs'] = ''
- if self.opts.input_tab <> 'None':
- cins = ['\n',]
- cins.append('#for intab in $input1:')
- cins.append('--input_tab "$intab"')
- cins.append('#end for\n')
- xdict['command_inputs'] = '\n'.join(cins)
- xdict['inputs'] = '''<param name="input1" type="data" format="%s" label="Select one or more input files from your history" multiple="true"/> \n''' % self.inputFormats
- else:
- xdict['command_inputs'] = '' # assume no input - eg a random data generator
- xdict['inputs'] = ''
- xdict['inputs'] += '<param name="job_name" type="text" label="Supply a name for the outputs to remind you what they contain" value="%s"/> \n' % self.toolname
- xdict['toolname'] = self.toolname
- xdict['toolid'] = self.toolid
- xdict['interpreter'] = self.opts.interpreter
- xdict['scriptname'] = self.sfile
- if self.opts.make_HTML:
- xdict['command_outputs'] += ' --output_dir "$html_file.files_path" --output_html "$html_file" --make_HTML "yes"'
- xdict['outputs'] += '<data format="html" name="html_file" label="${job_name}.html"/>\n'
- else:
- xdict['command_outputs'] += ' --output_dir "./"'
- if self.opts.output_tab <> 'None':
- xdict['command_outputs'] += ' --output_tab "$tab_file"'
- xdict['outputs'] += '<data format="%s" name="tab_file" label="${job_name}"/>\n' % self.outFormats
- xdict['command'] = newCommand % xdict
- xmls = newXML % xdict
- xf = open(self.xmlfile,'w')
- xf.write(xmls)
- xf.write('\n')
- xf.close()
- # ready for the tarball
-
-
- def makeTooltar(self):
- """
- a tool is a gz tarball with eg
- /toolname/tool.xml /toolname/tool.py /toolname/test-data/test1_in.foo ...
- """
- retval = self.run()
- if retval:
- print >> sys.stderr,'## Run failed. Cannot build yet. Please fix and retry'
- sys.exit(1)
- tdir = self.toolname
- os.mkdir(tdir)
- self.makeXML()
- if self.opts.make_HTML:
- if self.opts.help_text:
- hlp = open(self.opts.help_text,'r').read()
- else:
- hlp = 'Please ask the tool author for help as none was supplied at tool generation\n'
- if self.opts.include_dependencies == "yes":
- tooldepcontent = toolhtmldepskel % hlp
- depf = open(os.path.join(tdir,'tool_dependencies.xml'),'w')
- depf.write(tooldepcontent)
- depf.write('\n')
- depf.close()
- if self.opts.input_tab <> 'None': # no reproducible test otherwise? TODO: maybe..
- testdir = os.path.join(tdir,'test-data')
- os.mkdir(testdir) # make tests directory
- for i,intab in enumerate(self.opts.input_tab):
- si = self.opts.input_tab[i]
- if si.find(',') <> -1:
- s = si.split(',')[0]
- si = s
- dest = os.path.join(testdir,os.path.basename(si))
- if si <> dest:
- shutil.copyfile(si,dest)
- if self.opts.output_tab <> 'None':
- shutil.copyfile(self.opts.output_tab,os.path.join(testdir,self.test1Output))
- if self.opts.make_HTML:
- shutil.copyfile(self.opts.output_html,os.path.join(testdir,self.test1HTML))
- if self.opts.output_dir:
- shutil.copyfile(self.tlog,os.path.join(testdir,'test1_out.log'))
- outpif = '%s.py' % self.toolname # new name
- outpiname = os.path.join(tdir,outpif) # path for the tool tarball
- pyin = os.path.basename(self.pyfile) # our name - we rewrite ourselves (TM)
- notes = ['# %s - a self annotated version of %s generated by running %s\n' % (outpiname,pyin,pyin),]
- notes.append('# to make a new Galaxy tool called %s\n' % self.toolname)
- notes.append('# User %s at %s\n' % (self.opts.user_email,timenow()))
- pi = open(self.pyfile,'r').readlines() # our code becomes new tool wrapper (!) - first Galaxy worm
- notes += pi
- outpi = open(outpiname,'w')
- outpi.write(''.join(notes))
- outpi.write('\n')
- outpi.close()
- stname = os.path.join(tdir,self.sfile)
- if not os.path.exists(stname):
- shutil.copyfile(self.sfile, stname)
- xtname = os.path.join(tdir,self.xmlfile)
- if not os.path.exists(xtname):
- shutil.copyfile(self.xmlfile,xtname)
- tarpath = "%s.gz" % self.toolname
- tar = tarfile.open(tarpath, "w:gz")
- tar.add(tdir,arcname=self.toolname)
- tar.close()
- shutil.copyfile(tarpath,self.opts.new_tool)
- shutil.rmtree(tdir)
- ## TODO: replace with optional direct upload to local toolshed?
- return retval
-
-
- def compressPDF(self,inpdf=None,thumbformat='png'):
- """need absolute path to pdf
- note that GS gets confoozled if no $TMP or $TEMP
- so we set it
- """
- assert os.path.isfile(inpdf), "## Input %s supplied to %s compressPDF not found" % (inpdf,self.myname)
- hlog = os.path.join(self.opts.output_dir,"compress_%s.txt" % os.path.basename(inpdf))
- sto = open(hlog,'a')
- our_env = os.environ.copy()
- our_tmp = our_env.get('TMP',None)
- if not our_tmp:
- our_tmp = our_env.get('TEMP',None)
- if not (our_tmp and os.path.exists(our_tmp)):
- newtmp = os.path.join(self.opts.output_dir,'tmp')
- try:
- os.mkdir(newtmp)
- except:
- sto.write('## WARNING - cannot make %s - it may exist or permissions need fixing\n' % newtmp)
- our_env['TEMP'] = newtmp
- if not self.temp_warned:
- sto.write('## WARNING - no $TMP or $TEMP!!! Please fix - using %s temporarily\n' % newtmp)
- self.temp_warned = True
- outpdf = '%s_compressed' % inpdf
- cl = ["gs", "-sDEVICE=pdfwrite", "-dNOPAUSE", "-dUseCIEColor", "-dBATCH","-dPDFSETTINGS=/printer", "-sOutputFile=%s" % outpdf,inpdf]
- x = subprocess.Popen(cl,stdout=sto,stderr=sto,cwd=self.opts.output_dir,env=our_env)
- retval1 = x.wait()
- sto.close()
- if retval1 == 0:
- os.unlink(inpdf)
- shutil.move(outpdf,inpdf)
- os.unlink(hlog)
- hlog = os.path.join(self.opts.output_dir,"thumbnail_%s.txt" % os.path.basename(inpdf))
- sto = open(hlog,'w')
- outpng = '%s.%s' % (os.path.splitext(inpdf)[0],thumbformat)
- if self.useGM:
- cl2 = ['gm', 'convert', inpdf, outpng]
- else: # assume imagemagick
- cl2 = ['convert', inpdf, outpng]
- x = subprocess.Popen(cl2,stdout=sto,stderr=sto,cwd=self.opts.output_dir,env=our_env)
- retval2 = x.wait()
- sto.close()
- if retval2 == 0:
- os.unlink(hlog)
- retval = retval1 or retval2
- return retval
-
-
- def getfSize(self,fpath,outpath):
- """
- format a nice file size string
- """
- size = ''
- fp = os.path.join(outpath,fpath)
- if os.path.isfile(fp):
- size = '0 B'
- n = float(os.path.getsize(fp))
- if n > 2**20:
- size = '%1.1f MB' % (n/2**20)
- elif n > 2**10:
- size = '%1.1f KB' % (n/2**10)
- elif n > 0:
- size = '%d B' % (int(n))
- return size
-
- def makeHtml(self):
- """ Create an HTML file content to list all the artifacts found in the output_dir
- """
-
- galhtmlprefix = """
-
-
-
-
-
-
-
-
\n"""
-
- flist = os.listdir(self.opts.output_dir)
- flist = [x for x in flist if x <> 'Rplots.pdf']
- flist.sort()
- html = []
- html.append(galhtmlprefix % progname)
- html.append('<div class="toolFormTitle">Galaxy Tool "%s" run at %s</div>' % (self.toolname,timenow()))
- fhtml = []
- if len(flist) > 0:
- logfiles = [x for x in flist if x.lower().endswith('.log')] # log file names determine sections
- logfiles.sort()
- logfiles = [x for x in logfiles if os.path.abspath(x) <> os.path.abspath(self.tlog)]
- logfiles.append(os.path.abspath(self.tlog)) # make it the last one
- pdflist = []
- npdf = len([x for x in flist if os.path.splitext(x)[-1].lower() == '.pdf'])
- for rownum,fname in enumerate(flist):
- dname,e = os.path.splitext(fname)
- sfsize = self.getfSize(fname,self.opts.output_dir)
- if e.lower() == '.pdf' : # compress and make a thumbnail
- thumb = '%s.%s' % (dname,self.thumbformat)
- pdff = os.path.join(self.opts.output_dir,fname)
- retval = self.compressPDF(inpdf=pdff,thumbformat=self.thumbformat)
- if retval == 0:
- pdflist.append((fname,thumb))
- else:
- pdflist.append((fname,fname))
- if (rownum+1) % 2 == 0:
- fhtml.append('<tr class="odd_row"><td><a href="%s">%s</a></td><td>%s</td></tr>' % (fname,fname,sfsize))
- else:
- fhtml.append('<tr><td><a href="%s">%s</a></td><td>%s</td></tr>' % (fname,fname,sfsize))
- for logfname in logfiles: # expect at least tlog - if more
- if os.path.abspath(logfname) == os.path.abspath(self.tlog): # handled later
- sectionname = 'All tool run'
- if (len(logfiles) > 1):
- sectionname = 'Other'
- ourpdfs = pdflist
- else:
- realname = os.path.basename(logfname)
- sectionname = os.path.splitext(realname)[0].split('_')[0] # break in case _ added to log
- ourpdfs = [x for x in pdflist if os.path.basename(x[0]).split('_')[0] == sectionname]
- pdflist = [x for x in pdflist if os.path.basename(x[0]).split('_')[0] <> sectionname] # remove
- nacross = 1
- npdf = len(ourpdfs)
-
- if npdf > 0:
- nacross = math.sqrt(npdf) ## int(round(math.log(npdf,2)))
- if int(nacross)**2 != npdf:
- nacross += 1
- nacross = int(nacross)
- width = min(400,int(1200/nacross))
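- # illustrative note (not in the original source): with 10 PDFs, sqrt(10) ~ 3.2 and 3*3 < 10,
- # so nacross becomes 4 and each thumbnail is rendered min(400, 1200/4) = 300 pixels wide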
- html.append('<div class="toolFormTitle">%s images and outputs</div>' % sectionname)
- html.append('(Click on a thumbnail image to download the corresponding original PDF image)<br/>')
- ntogo = nacross # counter for table row padding with empty cells
- html.append('<div><table cellpadding="2" cellspacing="2">\n<tr>')
- for i,paths in enumerate(ourpdfs):
-     fname,thumb = paths
-     s = """<td><a href="%s"><img src="%s" title="Click to download a PDF of %s" width="%d" alt="Image called %s"/></a></td>\n""" % (fname,thumb,fname,width,fname)
-     if ((i+1) % nacross == 0): # this thumbnail ends a row
-         s += '</tr>\n'
-         ntogo = 0
-         if i < (npdf - 1): # more to come so open the next row
-             s += '<tr>'
-             ntogo = nacross
-     else:
-         ntogo -= 1
-     html.append(s)
- if html[-1].strip().endswith('</tr>'): # last row was closed inside the loop
-     html.append('</table></div>\n')
- else:
-     if ntogo > 0: # pad the incomplete last row with empty cells
-         html.append('<td>&nbsp;</td>'*ntogo)
-     html.append('</tr></table></div>\n')
- logt = open(logfname,'r').readlines()
- logtext = [x for x in logt if x.strip() > '']
- html.append('<div class="toolFormTitle">%s log output</div>' % sectionname)
- if len(logtext) > 1:
- html.append('\n<pre>\n')
- html += logtext
- html.append('\n</pre>\n')
- else:
- html.append('%s is empty<br/>' % logfname)
- if len(fhtml) > 0:
- fhtml.insert(0,'<table><tr><th>Output File Name (click to view)</th><th>Size</th></tr>\n')
- fhtml.append('</table>')
- html.append('<div class="toolFormTitle">All output files available for downloading</div>\n')
- html += fhtml # add all non-pdf files to the end of the display
- else:
- html.append('<h2>### Error - %s returned no files - please confirm that parameters are sane</h2>' % self.opts.interpreter)
- html.append(galhtmlpostfix)
- htmlf = file(self.opts.output_html,'w')
- htmlf.write('\n'.join(html))
- htmlf.write('\n')
- htmlf.close()
- self.html = html
-
-
- def run(self):
- """
- scripts must be small enough not to fill the pipe!
- """
- if self.treatbashSpecial and self.opts.interpreter in ['bash','sh']:
- retval = self.runBash()
- else:
- if self.opts.output_dir:
- ste = open(self.elog,'w')
- sto = open(self.tlog,'w')
- sto.write('## Toolfactory generated command line = %s\n' % ' '.join(self.cl))
- sto.flush()
- p = subprocess.Popen(self.cl,shell=False,stdout=sto,stderr=ste,stdin=subprocess.PIPE,cwd=self.opts.output_dir)
- else:
- p = subprocess.Popen(self.cl,shell=False,stdin=subprocess.PIPE)
- p.stdin.write(self.script)
- p.stdin.close()
- retval = p.wait()
- if self.opts.output_dir:
- sto.close()
- ste.close()
- err = open(self.elog,'r').readlines()
- if retval <> 0 and err: # problem
- print >> sys.stderr,err
- if self.opts.make_HTML:
- self.makeHtml()
- return retval
-
- def runBash(self):
- """
- cannot use - for bash so use self.sfile
- """
- if self.opts.output_dir:
- s = '## Toolfactory generated command line = %s\n' % ' '.join(self.cl)
- sto = open(self.tlog,'w')
- sto.write(s)
- sto.flush()
- p = subprocess.Popen(self.cl,shell=False,stdout=sto,stderr=sto,cwd=self.opts.output_dir)
- else:
- p = subprocess.Popen(self.cl,shell=False)
- retval = p.wait()
- if self.opts.output_dir:
- sto.close()
- if self.opts.make_HTML:
- self.makeHtml()
- return retval
-
-
-def main():
- u = """
- This is a Galaxy wrapper. It expects to be called by a special purpose tool.xml as:
- rgToolFactoryMultIn.py --script_path "$scriptPath" --tool_name "foo" --interpreter "Rscript"
-
- """
- op = optparse.OptionParser()
- a = op.add_option
- a('--script_path',default=None)
- a('--tool_name',default=None)
- a('--interpreter',default=None)
- a('--output_dir',default='./')
- a('--output_html',default=None)
- a('--input_tab',default=[], action="append")
- a("--input_formats",default="tabular")
- a('--output_tab',default="None")
- a('--output_format',default='tabular')
- a('--user_email',default='Unknown')
- a('--bad_user',default=None)
- a('--make_Tool',default=None)
- a('--make_HTML',default=None)
- a('--help_text',default=None)
- a('--tool_desc',default=None)
- a('--new_tool',default=None)
- a('--tool_version',default=None)
- a('--include_dependencies',default=None)
- opts, args = op.parse_args()
- assert not opts.bad_user,'UNAUTHORISED: %s is NOT authorized to use this tool until Galaxy admin adds %s to admin_users in universe_wsgi.ini' % (opts.bad_user,opts.bad_user)
- assert opts.tool_name,'## Tool Factory expects a tool name - eg --tool_name=DESeq'
- assert opts.interpreter,'## Tool Factory wrapper expects an interpreter - eg --interpreter=Rscript'
- assert os.path.isfile(opts.script_path),'## Tool Factory wrapper expects a script path - eg --script_path=foo.R'
- if opts.output_dir:
- try:
- os.makedirs(opts.output_dir)
- except:
- pass
- opts.input_tab = [x.replace('"','').replace("'",'') for x in opts.input_tab]
- r = ScriptRunner(opts)
- if opts.make_Tool:
- retcode = r.makeTooltar()
- else:
- retcode = r.run()
- os.unlink(r.sfile)
- if retcode:
- sys.exit(retcode) # indicate failure to job runner
-
-
-if __name__ == "__main__":
- main()
-
-
diff -r 117a5ada6a6a -r 2202872ebbe8 rgToolFactoryMultIn.xml
--- a/rgToolFactoryMultIn.xml Thu Aug 28 02:34:24 2014 -0400
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,343 +0,0 @@
-
- Makes scripts into tools
-
- ghostscript
- graphicsmagick
-
-
-#if ( $__user_email__ not in $__admin_users__ ):
- rgToolFactoryMultIn.py --bad_user $__user_email__
-#else:
- rgToolFactoryMultIn.py --script_path "$runme" --interpreter "$interpreter"
- --tool_name "$tool_name" --user_email "$__user_email__"
- #if $make_TAB.value=="yes":
- --output_tab "$output1"
- --output_format "$output_format"
- #end if
- #if $makeMode.make_Tool=="yes":
- --make_Tool "$makeMode.make_Tool"
- --tool_desc "$makeMode.tool_desc"
- --tool_version "$makeMode.tool_version"
- --new_tool "$new_tool"
- --help_text "$helpme"
- #if $make_HTML.value=="yes":
- #if $makeMode.include_deps.value=="yes":
- --include_dependencies "yes"
- #end if
- #end if
- #end if
- #if $make_HTML.value=="yes":
- --output_dir "$html_file.files_path" --output_html "$html_file" --make_HTML "yes"
- #else:
- --output_dir "."
- #end if
- #if $input1 != 'None':
- --input_formats "$input_formats"
- #for intab in $input1:
- #if $add_names.value == "yes":
- --input_tab "$intab,$intab.name"
- #else:
- --input_tab "$intab"
- #end if
- #end for
- #end if
-#end if
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
- make_TAB=="yes"
-
-
-
-
-
-
-
- make_HTML == "yes"
-
-
- makeMode['make_Tool'] == "yes"
-
-
-
-$dynScript
-
-#if $makeMode.make_Tool == "yes":
-${makeMode.help_text}
-#end if
-
-
-
-
-.. class:: warningmark
-
-**Details and attribution** GTF_
-
-**Local Admins ONLY** Only users whose IDs are found in the local admin_users configuration setting in universe_wsgi.ini can run this tool.
-
-**If you find a bug** please raise an issue at the bitbucket repository GTFI_
-
-**What it does** This tool enables a user to paste and submit an arbitrary R/python/perl script to Galaxy.
-
-**Input options** This version accepts one or more input files selected from the history for simple transformation or reporting.
-
-**Output options** Optional script outputs include a single new history tabular file; alternatively, for scripts that create multiple outputs,
-a new HTML report linking all the files and images created by the script can be generated automatically.
-
-**Tool Generation option** Once the script is working with test data, this tool will optionally generate a new Galaxy tool in a gzip file
-ready to upload to your local toolshed for sharing and installation. Provide a small sample input when you generate the tool because
-it will become the input for the generated functional test.
-
-.. class:: warningmark
-
-**Note to system administrators** This tool offers *NO* built-in protection against malicious scripts. It should only be installed on private/personal Galaxy instances.
-Admin_users will have the power to do anything they want as the Galaxy user if you install this tool.
-
-.. class:: warningmark
-
-**Use on public servers** is STRONGLY discouraged for obvious reasons
-
-The tools generated by this tool will run just as securely as any other normally installed Galaxy tool but, like any other new tool, should always be checked carefully before installation.
-We recommend that you follow the good code hygiene practices associated with a safe toolshed.
-
-**Scripting conventions** The pasted script will be executed with the path to the optional output file (or "None" if none is wanted) as the first command line parameter,
-followed by the paths of any input files selected from your history. The script must deal appropriately with these - see the examples below.
-Note that if an optional HTML output is selected, all the output files created by the script will be nicely presented as links, with pdf images linked as thumbnails in that output.
-This can be handy for complex scripts creating lots of output.
-
-**Examples**
- $OUTF
-
-A trivial perl script example to show that even perl works::
-
- #
- # change all occurrences of a string in a file to another string
- #
- $newfile = $ARGV[0];
- $oldfile = $ARGV[1];
- $old = "gene";
- $new = "foo";
- open(OF, $oldfile);
- open(NF, ">$newfile");
- # read in each line of the file
- while ($line = <OF>) {
- $line =~ s/$old/$new/;
- print NF $line;
- }
- close(OF);
- close(NF);
-
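-A minimal python sketch (illustrative only, not shipped with the original tool) of this wrapper's convention - output path first, then any number of input paths::
-
- import sys
- outp = sys.argv[1]       # path for the new history output, or the literal string "None"
- infiles = sys.argv[2:]   # zero or more selected input files
- if outp != "None":
-     with open(outp, "w") as o:
-         for arg in infiles:
-             path = arg.split(",")[0]   # entries may arrive as "path,name" when history names are passed
-             with open(path) as f:
-                 o.write(f.read())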
-]]>
-
-**Citation**
-
-
-Paper_ :
-
-Creating re-usable tools from scripts: The Galaxy Tool Factory
-Ross Lazarus; Antony Kaspi; Mark Ziemann; The Galaxy Team
-Bioinformatics 2012; doi: 10.1093/bioinformatics/bts573
-
-
-**Licensing**
-
-Copyright Ross Lazarus (ross period lazarus at gmail period com) May 2012
-All rights reserved.
-Licensed under the LGPL_
-
-.. _LGPL: http://www.gnu.org/copyleft/lesser.html
-.. _GTF: https://bitbucket.org/fubar/galaxytoolfactory
-.. _GTFI: https://bitbucket.org/fubar/galaxytoolfactory/issues
-.. _Paper: http://bioinformatics.oxfordjournals.org/cgi/reprint/bts573?ijkey=lczQh1sWrMwdYWJ&keytype=ref
-
-
-
-
-
-
-
diff -r 117a5ada6a6a -r 2202872ebbe8 tool_dependencies.xml
--- a/tool_dependencies.xml Thu Aug 28 02:34:24 2014 -0400
+++ /dev/null Thu Jan 01 00:00:00 1970 +0000
@@ -1,12 +0,0 @@
-
-
-
-
-
-
-
-
-
- Only Admins can use this tool generator but please do NOT install on a public facing Galaxy as it exposes unrestricted scripting as your Galaxy user
-
-