Repository 'unmicst'
hg clone https://toolshed.g2.bx.psu.edu/repos/perssond/unmicst

Changeset 0:6bec4fef6b2e (2021-03-12)
Next changeset 1:74fe58ff55a5 (2022-09-07)
Commit message:
"planemo upload for repository https://github.com/ohsu-comp-bio/unmicst commit 73e4cae15f2d7cdc86719e77470eb00af4b6ebb7-dirty"
added:
UnMicst.py
batchUNet2DTMACycif.py
batchUNet2DtCycif.py
batchUnMicst.py
macros.xml
models/CytoplasmIncell/checkpoint
models/CytoplasmIncell/datasetMean.data
models/CytoplasmIncell/datasetStDev.data
models/CytoplasmIncell/hp.data
models/CytoplasmIncell/model.ckpt.data-00000-of-00001
models/CytoplasmIncell/model.ckpt.index
models/CytoplasmIncell/model.ckpt.meta
models/CytoplasmIncell2/datasetMean.data
models/CytoplasmIncell2/datasetStDev.data
models/CytoplasmIncell2/hp.data
models/CytoplasmIncell2/model.ckpt.data-00000-of-00001
models/CytoplasmIncell2/model.ckpt.index
models/CytoplasmIncell2/model.ckpt.meta
models/CytoplasmZeissNikon/checkpoint
models/CytoplasmZeissNikon/datasetMean.data
models/CytoplasmZeissNikon/datasetStDev.data
models/CytoplasmZeissNikon/hp.data
models/CytoplasmZeissNikon/model.ckpt.data-00000-of-00001
models/CytoplasmZeissNikon/model.ckpt.index
models/CytoplasmZeissNikon/model.ckpt.meta
models/mousenucleiDAPI/checkpoint
models/mousenucleiDAPI/datasetMean.data
models/mousenucleiDAPI/datasetStDev.data
models/mousenucleiDAPI/hp.data
models/mousenucleiDAPI/model.ckpt.data-00000-of-00001
models/mousenucleiDAPI/model.ckpt.index
models/mousenucleiDAPI/model.ckpt.meta
models/mousenucleiDAPI/nuclei20x2bin1chan.data-00000-of-00001
models/mousenucleiDAPI/nuclei20x2bin1chan.index
models/mousenucleiDAPI/nuclei20x2bin1chan.meta
models/nucleiDAPI/checkpoint
models/nucleiDAPI/datasetMean.data
models/nucleiDAPI/datasetStDev.data
models/nucleiDAPI/hp.data
models/nucleiDAPI/model.ckpt.data-00000-of-00001
models/nucleiDAPI/model.ckpt.index
models/nucleiDAPI/model.ckpt.meta
models/nucleiDAPI1-5/checkpoint
models/nucleiDAPI1-5/datasetMean.data
models/nucleiDAPI1-5/datasetStDev.data
models/nucleiDAPI1-5/hp.data
models/nucleiDAPI1-5/model.ckpt.index
models/nucleiDAPI1-5/model.ckpt.meta
models/nucleiDAPILAMIN/checkpoint
models/nucleiDAPILAMIN/datasetMean.data
models/nucleiDAPILAMIN/datasetStDev.data
models/nucleiDAPILAMIN/hp.data
models/nucleiDAPILAMIN/model.ckpt.index
models/nucleiDAPILAMIN/model.ckpt.meta
toolbox/GPUselect.py
toolbox/PartitionOfImage.py
toolbox/__pycache__/GPUselect.cpython-37.pyc
toolbox/__pycache__/PartitionOfImage.cpython-36.pyc
toolbox/__pycache__/PartitionOfImage.cpython-37.pyc
toolbox/__pycache__/__init__.cpython-36.pyc
toolbox/__pycache__/ftools.cpython-36.pyc
toolbox/__pycache__/ftools.cpython-37.pyc
toolbox/__pycache__/imtools.cpython-36.pyc
toolbox/__pycache__/imtools.cpython-37.pyc
toolbox/ftools.py
toolbox/imtools.py
unmicst.xml
diff -r 000000000000 -r 6bec4fef6b2e UnMicst.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/UnMicst.py Fri Mar 12 00:17:29 2021 +0000
@@ -0,0 +1,674 @@
[674 added lines; the changeset viewer rendered this hunk as a truncated raw byte dump. Decoded highlights:
 - imports numpy, scipy, tensorflow.compat.v1 (with tf.disable_v2_behavior()), skimage, argparse, czifile, nd2reader, tifffile, and the toolbox modules (imtools, ftools, PartitionOfImage.PI2D, GPUselect)
 - class UNet2D: hyper-parameter setup (setup/setupWithHP), residual down-sampling blocks (stacked convolutions plus a 1x1 shortcut, batch normalization, max-pooling), a bottom layer, and further (truncated) up-sampling, training, and inference code
 - __main__: parses imagePath plus --channel, --scalingFactor, --outputPath, --stackOutput, --classOrder, --mean, --std, --outlier, and a GPU option; reads ome.tif/btf, tif, czi, and nd2 inputs; rescales intensity; runs per-class inference; writes _Probabilities_/_Preview_ (stacked) or _ContoursPM_/_NucleiPM_ TIFFs
 - trailing comments: outputs aligned with ilastik class order; all classes written to a single file; selectable models (human nuclei, mouse nuclei, cytoplasm); append-mode saving to reduce the memory footprint; --classOrder specifies which class is background, contours, and nuclei]
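For orientation, a hedged sketch of invoking this script from the command line, matching the @CMD_BEGIN@ token that macros.xml (below) defines as "python ${__tool_directory__}/UnMicst.py". The flag spellings are inferred from the decoded argparse attribute names (args.channel, args.scalingFactor, args.outputPath, args.stackOutput); the input path and values are illustrative only:

import subprocess

# Hypothetical invocation; not code from the changeset.
subprocess.run([
    "python", "UnMicst.py", "exemplar-001.ome.tif",  # positional imagePath
    "--channel", "0",             # channel to run inference on
    "--scalingFactor", "1.0",     # resize factor applied before inference
    "--outputPath", "probability_maps",
    "--stackOutput",              # write all classes into one stacked TIFF
], check=True)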
diff -r 000000000000 -r 6bec4fef6b2e batchUNet2DTMACycif.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/batchUNet2DTMACycif.py Fri Mar 12 00:17:29 2021 +0000
@@ -0,0 +1,594 @@
[594 added lines; truncated raw byte dump. Decoded highlights: the same UNet2D class (plain TF1 import), hard-coded Windows log/model/probability-map paths, several commented-out training and test blocks, and a __main__ batch loop that globs ZTMA_18_810* samples under Y:/sorger/data/RareCyte/Clarence/NKI_TMA, reads each dearrayed core (skipping TMA_MAP.tif), rescales intensity, and writes _ContoursPM_ and _NucleiPM_ TIFFs to a prob_maps folder per sample]
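The decoded file also carries commented-out training calls such as UNet2D.setup(128, 1, 2, 12, 2, 2, 3, 4, 0.1, 4, 8). An annotated restatement of that call, with keywords matching the setup() signature in the class above (a sketch, not code from the changeset):

UNet2D.setup(
    imSize=128,         # square patch size fed to the network
    nChannels=1,        # single input channel (e.g. DAPI)
    nClasses=2,         # number of output classes
    nOut0=12,           # feature maps in the first block
    featMapsFact=2,     # feature-map growth per down-sampling layer
    downSampFact=2,     # max-pool stride
    kernelSize=3,       # convolution kernel size ('ks')
    nExtraConvs=4,      # extra convolutions per down-sampling block
    stdDev0=0.1,        # stddev of the truncated-normal weight init
    nDownSampLayers=4,  # encoder depth
    batchSize=8,
)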
diff -r 000000000000 -r 6bec4fef6b2e batchUNet2DtCycif.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/batchUNet2DtCycif.py Fri Mar 12 00:17:29 2021 +0000
@@ -0,0 +1,553 @@
[553 added lines; truncated raw byte dump. Decoded highlights: the same UNet2D class; singleImageInferenceSetup loads hp.data, datasetMean.data, and datasetStDev.data from the model folder and restores model.ckpt into a TF session; singleImageInference tiles the image with PI2D (margin = imSize/8), normalizes each patch by the dataset mean/stddev, runs batched inference, and blends the patches back; the __main__ loop globs exemplar-001* samples under D:\LSP\cycif\testsets, processes registration/*.tif at the requested scaling factor, and writes _ContoursPM_ and _NucleiPM_ TIFFs to prob_maps]
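The inference path in this file reduces to four calls. A minimal sketch, assuming a trained model directory laid out like the models/ folders in this changeset and a 2-D single-channel image (the script rescales intensity first; omitted here); the import line is hypothetical:

import tifffile
from batchUNet2DtCycif import UNet2D  # hypothetical import of the class above

I = tifffile.imread("image.tif", key=0)                   # 2-D DAPI plane (assumed)
UNet2D.singleImageInferenceSetup("models/nucleiDAPI", 0)  # restore model.ckpt on GPU 0
contours = UNet2D.singleImageInference(I, 'accumulate', 1)  # contour class map
nuclei = UNet2D.singleImageInference(I, 'accumulate', 2)    # nuclei class map
UNet2D.singleImageInferenceCleanup()                      # close the TF session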
diff -r 000000000000 -r 6bec4fef6b2e batchUnMicst.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/batchUnMicst.py Fri Mar 12 00:17:29 2021 +0000
@@ -0,0 +1,588 @@
[588 added lines; truncated raw byte dump. Decoded highlights: the same UNet2D class plus an argparse interface (positional imagePath, --channel, --TMA flag, --scalingFactor); the __main__ loop globs exemplar* samples, selects dearray/*.tif (excluding TMA_MAP.tif) when --TMA is set and registration/*ome.tif otherwise, rescales intensity, runs contour and nuclei inference, and writes _ContoursPM_ and _NucleiPM_ TIFFs to prob_maps]
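The --TMA branch only changes file discovery; an illustrative restatement of that logic (function name and paths hypothetical):

import glob

def find_inputs(sample_dir, tma=False):
    """Mirror of the file-selection branch decoded above."""
    if tma:
        # dearrayed TMA cores, skipping the core map image
        return [f for f in glob.glob(sample_dir + "/dearray/*.tif")
                if not f.endswith("TMA_MAP.tif")]
    # whole-slide case: the registered ome.tif stack
    return glob.glob(sample_dir + "/registration/*ome.tif")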
diff -r 000000000000 -r 6bec4fef6b2e macros.xml
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/macros.xml Fri Mar 12 00:17:29 2021 +0000
@@ -0,0 +1,28 @@
+<?xml version="1.0"?>
+<macros>
+    <xml name="requirements">
+        <requirements>
+            <requirement type="package" version="3.7">python</requirement>
+            <requirement type="package" version="1.15.0">tensorflow</requirement>
+            <requirement type="package" version="1.15.1">tensorflow-estimator</requirement>
+            <requirement type="package">cudnn</requirement>
+            <requirement type="package" version="10.0">cudatoolkit</requirement>
+            <requirement type="package" version="0.17.2">scikit-image</requirement>
+            <requirement type="package" version="1.4.1">scipy</requirement>
+            <requirement type="package" version="2020.7.24">tifffile</requirement>
+            <requirement type="package" version="2019.7.2">czifile</requirement>
+            <requirement type="package" version="3.2.3">nd2reader</requirement>
+        </requirements>
+    </xml>
+
+    <xml name="version_cmd">
+        <version_command>echo @VERSION@</version_command>
+    </xml>
+    <xml name="citations">
+        <citations>
+        </citations>
+    </xml>
+
+    <token name="@VERSION@">3.1.1</token>
+    <token name="@CMD_BEGIN@">python ${__tool_directory__}/UnMicst.py</token>
+</macros>
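The tensorflow 1.15 pin matches the TF1 graph-mode code throughout this changeset; UnMicst.py uses the compatibility import so the same graph code also runs under a TensorFlow 2 install. A minimal sketch of that pattern (placeholder shape illustrative):

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()  # as at the top of UnMicst.py

# Graph-mode placeholder of the kind UNet2D builds (imSize=128, 1 channel)
data = tf.placeholder("float", shape=[None, 128, 128, 1], name="data")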
diff -r 000000000000 -r 6bec4fef6b2e models/CytoplasmIncell/checkpoint
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/models/CytoplasmIncell/checkpoint Fri Mar 12 00:17:29 2021 +0000
@@ -0,0 +1,2 @@
+model_checkpoint_path: "D:\\Dan\\CytoplasmIncell\\model.ckpt"
+all_model_checkpoint_paths: "D:\\Dan\\CytoplasmIncell\\model.ckpt"
diff -r 000000000000 -r 6bec4fef6b2e models/CytoplasmIncell/datasetMean.data
Binary file models/CytoplasmIncell/datasetMean.data has changed
diff -r 000000000000 -r 6bec4fef6b2e models/CytoplasmIncell/datasetStDev.data
Binary file models/CytoplasmIncell/datasetStDev.data has changed
diff -r 000000000000 -r 6bec4fef6b2e models/CytoplasmIncell/hp.data
Binary file models/CytoplasmIncell/hp.data has changed
diff -r 000000000000 -r 6bec4fef6b2e models/CytoplasmIncell/model.ckpt.data-00000-of-00001
Binary file models/CytoplasmIncell/model.ckpt.data-00000-of-00001 has changed
diff -r 000000000000 -r 6bec4fef6b2e models/CytoplasmIncell/model.ckpt.index
Binary file models/CytoplasmIncell/model.ckpt.index has changed
diff -r 000000000000 -r 6bec4fef6b2e models/CytoplasmIncell/model.ckpt.meta
Binary file models/CytoplasmIncell/model.ckpt.meta has changed
diff -r 000000000000 -r 6bec4fef6b2e models/CytoplasmIncell2/datasetMean.data
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/models/CytoplasmIncell2/datasetMean.data Fri Mar 12 00:17:29 2021 +0000
@@ -0,0 +1,1 @@
+�G?���Q�.
\ No newline at end of file
diff -r 000000000000 -r 6bec4fef6b2e models/CytoplasmIncell2/datasetStDev.data
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/models/CytoplasmIncell2/datasetStDev.data Fri Mar 12 00:17:29 2021 +0000
@@ -0,0 +1,1 @@
+�G?���Q�.
\ No newline at end of file
diff -r 000000000000 -r 6bec4fef6b2e models/CytoplasmIncell2/hp.data
Binary file models/CytoplasmIncell2/hp.data has changed
diff -r 000000000000 -r 6bec4fef6b2e models/CytoplasmIncell2/model.ckpt.data-00000-of-00001
Binary file models/CytoplasmIncell2/model.ckpt.data-00000-of-00001 has changed
diff -r 000000000000 -r 6bec4fef6b2e models/CytoplasmIncell2/model.ckpt.index
Binary file models/CytoplasmIncell2/model.ckpt.index has changed
diff -r 000000000000 -r 6bec4fef6b2e models/CytoplasmIncell2/model.ckpt.meta
Binary file models/CytoplasmIncell2/model.ckpt.meta has changed
diff -r 000000000000 -r 6bec4fef6b2e models/CytoplasmZeissNikon/checkpoint
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/models/CytoplasmZeissNikon/checkpoint Fri Mar 12 00:17:29 2021 +0000
@@ -0,0 +1,2 @@
+model_checkpoint_path: "D:\\Dan\\CytoplasmZeissNikon\\model.ckpt"
+all_model_checkpoint_paths: "D:\\Dan\\CytoplasmZeissNikon\\model.ckpt"
diff -r 000000000000 -r 6bec4fef6b2e models/CytoplasmZeissNikon/datasetMean.data
Binary file models/CytoplasmZeissNikon/datasetMean.data has changed
diff -r 000000000000 -r 6bec4fef6b2e models/CytoplasmZeissNikon/datasetStDev.data
Binary file models/CytoplasmZeissNikon/datasetStDev.data has changed
diff -r 000000000000 -r 6bec4fef6b2e models/CytoplasmZeissNikon/hp.data
Binary file models/CytoplasmZeissNikon/hp.data has changed
diff -r 000000000000 -r 6bec4fef6b2e models/CytoplasmZeissNikon/model.ckpt.data-00000-of-00001
Binary file models/CytoplasmZeissNikon/model.ckpt.data-00000-of-00001 has changed
diff -r 000000000000 -r 6bec4fef6b2e models/CytoplasmZeissNikon/model.ckpt.index
Binary file models/CytoplasmZeissNikon/model.ckpt.index has changed
diff -r 000000000000 -r 6bec4fef6b2e models/CytoplasmZeissNikon/model.ckpt.meta
Binary file models/CytoplasmZeissNikon/model.ckpt.meta has changed
diff -r 000000000000 -r 6bec4fef6b2e models/mousenucleiDAPI/checkpoint
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/models/mousenucleiDAPI/checkpoint Fri Mar 12 00:17:29 2021 +0000
@@ -0,0 +1,2 @@
+model_checkpoint_path: "D:\\Olesja\\UNet\\nuclei20x2bin1chan 3layers ks3 bs16 20input\\model.ckpt"
+all_model_checkpoint_paths: "D:\\Olesja\\UNet\\nuclei20x2bin1chan 3layers ks3 bs16 20input\\model.ckpt"
diff -r 000000000000 -r 6bec4fef6b2e models/mousenucleiDAPI/datasetMean.data
Binary file models/mousenucleiDAPI/datasetMean.data has changed
diff -r 000000000000 -r 6bec4fef6b2e models/mousenucleiDAPI/datasetStDev.data
Binary file models/mousenucleiDAPI/datasetStDev.data has changed
diff -r 000000000000 -r 6bec4fef6b2e models/mousenucleiDAPI/hp.data
Binary file models/mousenucleiDAPI/hp.data has changed
diff -r 000000000000 -r 6bec4fef6b2e models/mousenucleiDAPI/model.ckpt.data-00000-of-00001
Binary file models/mousenucleiDAPI/model.ckpt.data-00000-of-00001 has changed
diff -r 000000000000 -r 6bec4fef6b2e models/mousenucleiDAPI/model.ckpt.index
Binary file models/mousenucleiDAPI/model.ckpt.index has changed
diff -r 000000000000 -r 6bec4fef6b2e models/mousenucleiDAPI/model.ckpt.meta
Binary file models/mousenucleiDAPI/model.ckpt.meta has changed
diff -r 000000000000 -r 6bec4fef6b2e models/mousenucleiDAPI/nuclei20x2bin1chan.data-00000-of-00001
Binary file models/mousenucleiDAPI/nuclei20x2bin1chan.data-00000-of-00001 has changed
diff -r 000000000000 -r 6bec4fef6b2e models/mousenucleiDAPI/nuclei20x2bin1chan.index
Binary file models/mousenucleiDAPI/nuclei20x2bin1chan.index has changed
diff -r 000000000000 -r 6bec4fef6b2e models/mousenucleiDAPI/nuclei20x2bin1chan.meta
Binary file models/mousenucleiDAPI/nuclei20x2bin1chan.meta has changed
diff -r 000000000000 -r 6bec4fef6b2e models/nucleiDAPI/checkpoint
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/models/nucleiDAPI/checkpoint Fri Mar 12 00:17:29 2021 +0000
@@ -0,0 +1,2 @@
+model_checkpoint_path: "D:\\LSP\\UNet\\tonsil20x1bin1chan\\TFModel\\model.ckpt"
+all_model_checkpoint_paths: "D:\\LSP\\UNet\\tonsil20x1bin1chan\\TFModel\\model.ckpt"
diff -r 000000000000 -r 6bec4fef6b2e models/nucleiDAPI/datasetMean.data
Binary file models/nucleiDAPI/datasetMean.data has changed
diff -r 000000000000 -r 6bec4fef6b2e models/nucleiDAPI/datasetStDev.data
Binary file models/nucleiDAPI/datasetStDev.data has changed
diff -r 000000000000 -r 6bec4fef6b2e models/nucleiDAPI/hp.data
Binary file models/nucleiDAPI/hp.data has changed
diff -r 000000000000 -r 6bec4fef6b2e models/nucleiDAPI/model.ckpt.data-00000-of-00001
Binary file models/nucleiDAPI/model.ckpt.data-00000-of-00001 has changed
diff -r 000000000000 -r 6bec4fef6b2e models/nucleiDAPI/model.ckpt.index
Binary file models/nucleiDAPI/model.ckpt.index has changed
diff -r 000000000000 -r 6bec4fef6b2e models/nucleiDAPI/model.ckpt.meta
Binary file models/nucleiDAPI/model.ckpt.meta has changed
diff -r 000000000000 -r 6bec4fef6b2e models/nucleiDAPI1-5/checkpoint
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/models/nucleiDAPI1-5/checkpoint Fri Mar 12 00:17:29 2021 +0000
@@ -0,0 +1,2 @@
+model_checkpoint_path: "D:\\LSP\\UNet\\TuuliaLPTBdapiTFv2\\model.ckpt"
+all_model_checkpoint_paths: "D:\\LSP\\UNet\\TuuliaLPTBdapiTFv2\\model.ckpt"
diff -r 000000000000 -r 6bec4fef6b2e models/nucleiDAPI1-5/datasetMean.data
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/models/nucleiDAPI1-5/datasetMean.data Fri Mar 12 00:17:29 2021 +0000
@@ -0,0 +1,1 @@
+�G?�\(��.
\ No newline at end of file
diff -r 000000000000 -r 6bec4fef6b2e models/nucleiDAPI1-5/datasetStDev.data
Binary file models/nucleiDAPI1-5/datasetStDev.data has changed
diff -r 000000000000 -r 6bec4fef6b2e models/nucleiDAPI1-5/hp.data
Binary file models/nucleiDAPI1-5/hp.data has changed
diff -r 000000000000 -r 6bec4fef6b2e models/nucleiDAPI1-5/model.ckpt.index
Binary file models/nucleiDAPI1-5/model.ckpt.index has changed
diff -r 000000000000 -r 6bec4fef6b2e models/nucleiDAPI1-5/model.ckpt.meta
Binary file models/nucleiDAPI1-5/model.ckpt.meta has changed
diff -r 000000000000 -r 6bec4fef6b2e models/nucleiDAPILAMIN/checkpoint
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/models/nucleiDAPILAMIN/checkpoint Fri Mar 12 00:17:29 2021 +0000
@@ -0,0 +1,2 @@
+model_checkpoint_path: "/home/cy101/files/CellBiology/IDAC/Clarence/LSP/UNet models/LPTCdapilamin5-36/model.ckpt"
+all_model_checkpoint_paths: "/home/cy101/files/CellBiology/IDAC/Clarence/LSP/UNet models/LPTCdapilamin5-36/model.ckpt"
diff -r 000000000000 -r 6bec4fef6b2e models/nucleiDAPILAMIN/datasetMean.data
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/models/nucleiDAPILAMIN/datasetMean.data Fri Mar 12 00:17:29 2021 +0000
@@ -0,0 +1,3 @@
+�G?�
+=p��
+.
\ No newline at end of file
diff -r 000000000000 -r 6bec4fef6b2e models/nucleiDAPILAMIN/datasetStDev.data
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/models/nucleiDAPILAMIN/datasetStDev.data Fri Mar 12 00:17:29 2021 +0000
@@ -0,0 +1,1 @@
+�G?�\(��.
\ No newline at end of file
diff -r 000000000000 -r 6bec4fef6b2e models/nucleiDAPILAMIN/hp.data
Binary file models/nucleiDAPILAMIN/hp.data has changed
diff -r 000000000000 -r 6bec4fef6b2e models/nucleiDAPILAMIN/model.ckpt.index
Binary file models/nucleiDAPILAMIN/model.ckpt.index has changed
diff -r 000000000000 -r 6bec4fef6b2e models/nucleiDAPILAMIN/model.ckpt.meta
Binary file models/nucleiDAPILAMIN/model.ckpt.meta has changed
diff -r 000000000000 -r 6bec4fef6b2e toolbox/GPUselect.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/toolbox/GPUselect.py Fri Mar 12 00:17:29 2021 +0000
@@ -0,0 +1,19 @@
+import subprocess, re
+import numpy as np
+
+def pick_gpu_lowest_memory():
+    output = subprocess.Popen("nvidia-smi", stdout=subprocess.PIPE, shell=True).communicate()[0]
+    output = output.decode("ascii")
+    gpu_output = output[output.find("Memory-Usage"):]
+    # lines of the form
+    # |    0      8734    C   python                                       11705MiB |
+    memory_regex = re.compile(r"[|]\s+?\D+?.+[ ](?P<gpu_memory>\d+)MiB /")
+    rows = gpu_output.split("\n")
+    result = []
+    for row in rows:  # reuse the split instead of re-splitting
+        m = memory_regex.search(row)
+        if not m:
+            continue
+        gpu_memory = int(m.group("gpu_memory"))
+        result.append(gpu_memory)
+    return np.argsort(result)[0]
\ No newline at end of file
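UnMicst.py imports this helper (from toolbox import GPUselect); a usage sketch, assuming nvidia-smi is on PATH, that pins the process to the least-used GPU before TensorFlow initializes:

import os
from toolbox import GPUselect

# Must run before the TF session is created for the mask to take effect.
os.environ["CUDA_VISIBLE_DEVICES"] = str(GPUselect.pick_gpu_lowest_memory())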
diff -r 000000000000 -r 6bec4fef6b2e toolbox/PartitionOfImage.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/toolbox/PartitionOfImage.py Fri Mar 12 00:17:29 2021 +0000
@@ -0,0 +1,305 @@
[305 added lines; truncated raw byte dump. Decoded highlights: class PI2D partitions a 2-D (or multi-channel) image into overlapping patches: setup() pads the image, precomputes patch coordinates and a weight window W that tapers to zero over the margin; getPatch(i) returns patch i; createOutput(nChannels) allocates the output (and, in 'accumulate' mode, a count buffer); patchOutput(i, P) writes a patch back, either replacing or weight-accumulating; getValidOutput() crops the margins and divides by the accumulated counts. A parallel PI3D class does the same for 3-D volumes, and its demo() round-trips a random volume through the pipeline and reports the maximum reconstruction error]
b
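For reference, a minimal PI2D round trip in the spirit of the PI3D.demo above (an illustrative sketch, not part of the committed file; the 256x256 input and 'accumulate' mode are arbitrary choices):

    import numpy as np
    from toolbox.PartitionOfImage import PI2D

    I = np.random.rand(256, 256)
    PI2D.setup(I, 128, 14, 'accumulate')  # patchSize=128, margin=14
    PI2D.createOutput(1)                  # single-channel output buffer
    for i in range(PI2D.NumPatches):
        P = PI2D.getPatch(i)              # identity "model": pass each patch through
        PI2D.patchOutput(i, P)
    J = PI2D.getValidOutput()
    print(np.max(np.abs(I - J)))          # ~0, up to float16 rounding

In 'accumulate' mode overlapping patches are blended with the weight ramp W, so a pass-through reconstructs the input; 'replace' simply overwrites overlapping regions.
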
diff -r 000000000000 -r 6bec4fef6b2e toolbox/__pycache__/GPUselect.cpython-37.pyc
b
Binary file toolbox/__pycache__/GPUselect.cpython-37.pyc has changed
b
diff -r 000000000000 -r 6bec4fef6b2e toolbox/__pycache__/PartitionOfImage.cpython-36.pyc
b
Binary file toolbox/__pycache__/PartitionOfImage.cpython-36.pyc has changed
b
diff -r 000000000000 -r 6bec4fef6b2e toolbox/__pycache__/PartitionOfImage.cpython-37.pyc
b
Binary file toolbox/__pycache__/PartitionOfImage.cpython-37.pyc has changed
b
diff -r 000000000000 -r 6bec4fef6b2e toolbox/__pycache__/__init__.cpython-36.pyc
b
Binary file toolbox/__pycache__/__init__.cpython-36.pyc has changed
b
diff -r 000000000000 -r 6bec4fef6b2e toolbox/__pycache__/ftools.cpython-36.pyc
b
Binary file toolbox/__pycache__/ftools.cpython-36.pyc has changed
b
diff -r 000000000000 -r 6bec4fef6b2e toolbox/__pycache__/ftools.cpython-37.pyc
b
Binary file toolbox/__pycache__/ftools.cpython-37.pyc has changed
b
diff -r 000000000000 -r 6bec4fef6b2e toolbox/__pycache__/imtools.cpython-36.pyc
b
Binary file toolbox/__pycache__/imtools.cpython-36.pyc has changed
b
diff -r 000000000000 -r 6bec4fef6b2e toolbox/__pycache__/imtools.cpython-37.pyc
b
Binary file toolbox/__pycache__/imtools.cpython-37.pyc has changed
b
diff -r 000000000000 -r 6bec4fef6b2e toolbox/ftools.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/toolbox/ftools.py Fri Mar 12 00:17:29 2021 +0000
[
@@ -0,0 +1,55 @@
+from os.path import *
+from os import listdir, makedirs, remove
+import pickle
+import shutil
+
+def fileparts(path): # path = file path
+    [p,f] = split(path)
+    [n,e] = splitext(f)
+    return [p,n,e]
+
+def listfiles(path,token): # path = folder path
+    l = []
+    for f in listdir(path):
+        fullPath = join(path,f)
+        if isfile(fullPath) and token in f:
+            l.append(fullPath)
+    l.sort()
+    return l
+
+def listsubdirs(path): # path = folder path
+    l = []
+    for f in listdir(path):
+        fullPath = join(path,f)
+        if isdir(fullPath):
+            l.append(fullPath)
+    l.sort()
+    return l
+
+def pathjoin(p,ne): # '/path/to/folder', 'name.extension' (or a subfolder)
+    return join(p,ne)
+
+def saveData(data,path):
+    print('saving data')
+    with open(path, 'wb') as dataFile: # context manager closes the handle even on error
+        pickle.dump(data, dataFile)
+
+def loadData(path):
+    print('loading data')
+    with open(path, 'rb') as dataFile:
+        return pickle.load(dataFile)
+
+def createFolderIfNonExistent(path):
+    if not exists(path): # from os.path
+        makedirs(path)
+
+def moveFile(fullPathSource,folderPathDestination):
+    [p,n,e] = fileparts(fullPathSource)
+    shutil.move(fullPathSource,pathjoin(folderPathDestination,n+e))
+
+def copyFile(fullPathSource,folderPathDestination):
+    [p,n,e] = fileparts(fullPathSource)
+    shutil.copy(fullPathSource,pathjoin(folderPathDestination,n+e))
+
+def removeFile(path):
+    remove(path)
\ No newline at end of file
b
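For reference, a minimal usage sketch of the helpers above (the paths are hypothetical):

    from toolbox.ftools import *

    tifs = listfiles('/data/images', '.tif')    # sorted full paths whose names contain '.tif'
    [p, n, e] = fileparts(tifs[0])              # folder, base name, extension
    createFolderIfNonExistent('/data/results')
    saveData({'files': tifs}, '/data/results/manifest.data')
    manifest = loadData('/data/results/manifest.data')
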
diff -r 000000000000 -r 6bec4fef6b2e toolbox/imtools.py
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/toolbox/imtools.py Fri Mar 12 00:17:29 2021 +0000
[
b"@@ -0,0 +1,312 @@\n+import matplotlib.pyplot as plt\r\n+import tifffile\r\n+import os\r\n+import numpy as np\r\n+from skimage import io as skio\r\n+from scipy.ndimage import *\r\n+from skimage.morphology import *\r\n+from skimage.transform import resize\r\n+\r\n+def tifread(path):\r\n+    return tifffile.imread(path)\r\n+\r\n+def tifwrite(I,path):\r\n+    tifffile.imsave(path, I)\r\n+\r\n+def imshow(I,**kwargs):\r\n+    if not kwargs:\r\n+        plt.imshow(I,cmap='gray')\r\n+    else:\r\n+        plt.imshow(I,**kwargs)\r\n+        \r\n+    plt.axis('off')\r\n+    plt.show()\r\n+\r\n+def imshowlist(L,**kwargs):\r\n+    n = len(L)\r\n+    for i in range(n):\r\n+        plt.subplot(1, n, i+1)\r\n+        if not kwargs:\r\n+            plt.imshow(L[i],cmap='gray')\r\n+        else:\r\n+            plt.imshow(L[i],**kwargs)\r\n+        plt.axis('off')\r\n+    plt.show()\r\n+\r\n+def imread(path):\r\n+    return skio.imread(path)\r\n+\r\n+def imwrite(I,path):\r\n+    skio.imsave(path,I)\r\n+\r\n+def im2double(I):\r\n+    if I.dtype == 'uint16':\r\n+        return I.astype('float64')/65535\r\n+    elif I.dtype == 'uint8':\r\n+        return I.astype('float64')/255\r\n+    elif I.dtype == 'float32':\r\n+        return I.astype('float64')\r\n+    elif I.dtype == 'float64':\r\n+        return I\r\n+    else:\r\n+        print('returned original image type: ', I.dtype)\r\n+        return I\r\n+\r\n+def size(I):\r\n+    return list(I.shape)\r\n+\r\n+def imresizeDouble(I,sizeOut): # input and output are double\r\n+    return resize(I,(sizeOut[0],sizeOut[1]),mode='reflect')\r\n+\r\n+def imresize3Double(I,sizeOut): # input and output are double\r\n+    return resize(I,(sizeOut[0],sizeOut[1],sizeOut[2]),mode='reflect')\r\n+\r\n+def imresizeUInt8(I,sizeOut): # input and output are UInt8\r\n+    return np.uint8(resize(I.astype(float),(sizeOut[0],sizeOut[1]),mode='reflect',order=0))\r\n+\r\n+def imresize3UInt8(I,sizeOut): # input and output are UInt8\r\n+    return np.uint8(resize(I.astype(float),(sizeOut[0],sizeOut[1],sizeOut[2]),mode='reflect',order=0))\r\n+\r\n+def normalize(I):\r\n+    m = np.min(I)\r\n+    M = np.max(I)\r\n+    if M > m:\r\n+        return (I-m)/(M-m)\r\n+    else:\r\n+        return I\r\n+\r\n+def snormalize(I):\r\n+    m = np.mean(I)\r\n+    s = np.std(I)\r\n+    if s > 0:\r\n+        return (I-m)/s\r\n+    else:\r\n+        return I\r\n+\r\n+def cat(a,I,J):\r\n+    return np.concatenate((I,J),axis=a)\r\n+\r\n+def imerode(I,r):\r\n+    return binary_erosion(I, disk(r))\r\n+\r\n+def imdilate(I,r):\r\n+    return binary_dilation(I, disk(r))\r\n+\r\n+def imerode3(I,r):\r\n+    return morphology.binary_erosion(I, ball(r))\r\n+\r\n+def imdilate3(I,r):\r\n+    return morphology.binary_dilation(I, ball(r))\r\n+\r\n+def sphericalStructuralElement(imShape,fRadius):\r\n+    if len(imShape) == 2:\r\n+        return disk(fRadius,dtype=float)\r\n+    if len(imShape) == 3:\r\n+        return ball(fRadius,dtype=float)\r\n+\r\n+def medfilt(I,filterRadius):\r\n+    return median_filter(I,footprint=sphericalStructuralElement(I.shape,filterRadius))\r\n+\r\n+def maxfilt(I,filterRadius):\r\n+    return maximum_filter(I,footprint=sphericalStructuralElement(I.shape,filterRadius))\r\n+\r\n+def minfilt(I,filterRadius):\r\n+    return minimum_filter(I,footprint=sphericalStructuralElement(I.shape,filterRadius))\r\n+\r\n+def ptlfilt(I,percentile,filterRadius):\r\n+    return percentile_filter(I,percentile,footprint=sphericalStructuralElement(I.shape,filterRadius))\r\n+\r\n+def 
imgaussfilt(I,sigma,**kwargs):\r\n+    return gaussian_filter(I,sigma,**kwargs)\r\n+\r\n+def imlogfilt(I,sigma,**kwargs):\r\n+    return -gaussian_laplace(I,sigma,**kwargs)\r\n+\r\n+def imgradmag(I,sigma):\r\n+    if len(I.shape) == 2:\r\n+        dx = imgaussfilt(I,sigma,order=[0,1])\r\n+        dy = imgaussfilt(I,sigma,order=[1,0])\r\n+        return np.sqrt(dx**2+dy**2)\r\n+    if len(I.shape) == 3:\r\n+        dx = imgaussfilt(I,sigma,order=[0,0,1])\r\n+        dy = imgaussfilt(I,sigma,order=[0,1,0])\r\n+        dz = imgaussfilt(I,sigma,order=[1,0,0])\r\n+        return np.sqrt(dx**2+dy**2+dz**2)\r\n+\r\n+def localstats(I,radius,justfeatnames=False):\r\n+    ptls = [10,30,50,70,90]\r\n+    featNames = []\r\n+    for i in range(len(ptls)):\r\n+ "..b":,:,nDerivativesPerSigma*i   ] = imgaussfilt(I,sigma)\r\n+        D[:,:,:,nDerivativesPerSigma*i+1 ] = dx\r\n+        D[:,:,:,nDerivativesPerSigma*i+2 ] = dy\r\n+        D[:,:,:,nDerivativesPerSigma*i+3 ] = dz\r\n+        D[:,:,:,nDerivativesPerSigma*i+4 ] = dxx\r\n+        D[:,:,:,nDerivativesPerSigma*i+5 ] = imgaussfilt(I,sigma,order=[0,1,1])\r\n+        D[:,:,:,nDerivativesPerSigma*i+6 ] = imgaussfilt(I,sigma,order=[1,0,1])\r\n+        D[:,:,:,nDerivativesPerSigma*i+7 ] = dyy\r\n+        D[:,:,:,nDerivativesPerSigma*i+8 ] = imgaussfilt(I,sigma,order=[1,1,0])\r\n+        D[:,:,:,nDerivativesPerSigma*i+9 ] = dzz\r\n+        D[:,:,:,nDerivativesPerSigma*i+10] = np.sqrt(dx**2+dy**2+dz**2)\r\n+        D[:,:,:,nDerivativesPerSigma*i+11] = np.sqrt(dxx**2+dyy**2+dzz**2)\r\n+\r\n+        # D[:,:,:,nDerivativesPerSigma*i   ] = imgaussfilt(I,sigma)\r\n+        # D[:,:,:,nDerivativesPerSigma*i+1 ] = np.sqrt(dx**2+dy**2+dz**2)\r\n+        # D[:,:,:,nDerivativesPerSigma*i+2 ] = np.sqrt(dxx**2+dyy**2+dzz**2)\r\n+    return D\r\n+    # derivatives are indexed by the last dimension, which is good for ML features but not for visualization,\r\n+    # in which case the expected dimensions are [plane,y(row),x(col)]; to obtain that ordering, do\r\n+    # D = np.moveaxis(D,[2,0,1],[0,1,2])\r\n+\r\n+def imfeatures(I=[],sigmaDeriv=1,sigmaLoG=1,locStatsRad=0,justfeatnames=False):\r\n+    if type(sigmaDeriv) is not list:\r\n+        sigmaDeriv = [sigmaDeriv]\r\n+    if type(sigmaLoG) is not list:\r\n+        sigmaLoG = [sigmaLoG]\r\n+    derivFeatNames = imderivatives([],sigmaDeriv,justfeatnames=True)\r\n+    nLoGFeats = len(sigmaLoG)\r\n+    locStatsFeatNames = []\r\n+    if locStatsRad > 1:\r\n+        locStatsFeatNames = localstats([],locStatsRad,justfeatnames=True)\r\n+    nLocStatsFeats = len(locStatsFeatNames)\r\n+    if justfeatnames == True:\r\n+        featNames = derivFeatNames\r\n+        for i in range(nLoGFeats):\r\n+            featNames.append('logSigma%d' % sigmaLoG[i])\r\n+        for i in range(nLocStatsFeats):\r\n+            featNames.append(locStatsFeatNames[i])\r\n+        return featNames\r\n+    nDerivFeats = len(derivFeatNames)\r\n+    nFeatures = nDerivFeats+nLoGFeats+nLocStatsFeats\r\n+    sI = size(I)\r\n+    F = np.zeros((sI[0],sI[1],nFeatures))\r\n+    F[:,:,:nDerivFeats] = imderivatives(I,sigmaDeriv)\r\n+    for i in range(nLoGFeats):\r\n+        F[:,:,nDerivFeats+i] = imlogfilt(I,sigmaLoG[i])\r\n+    if locStatsRad > 1:\r\n+        F[:,:,nDerivFeats+nLoGFeats:] = localstats(I,locStatsRad)\r\n+    return F\r\n+\r\n+def imfeatures3(I=[],sigmaDeriv=2,sigmaLoG=2,locStatsRad=0,justfeatnames=False):\r\n+    if type(sigmaDeriv) is not list:\r\n+        sigmaDeriv = [sigmaDeriv]\r\n+    if type(sigmaLoG) is not list:\r\n+        sigmaLoG = 
[sigmaLoG]\r\n+    derivFeatNames = imderivatives3([],sigmaDeriv,justfeatnames=True)\r\n+    nLoGFeats = len(sigmaLoG)\r\n+    locStatsFeatNames = []\r\n+    if locStatsRad > 1:\r\n+        locStatsFeatNames = localstats3([],locStatsRad,justfeatnames=True)\r\n+    nLocStatsFeats = len(locStatsFeatNames)\r\n+    if justfeatnames == True:\r\n+        featNames = derivFeatNames\r\n+        for i in range(nLoGFeats):\r\n+            featNames.append('logSigma%d' % sigmaLoG[i])\r\n+        for i in range(nLocStatsFeats):\r\n+            featNames.append(locStatsFeatNames[i])\r\n+        return featNames\r\n+    nDerivFeats = len(derivFeatNames)\r\n+    nFeatures = nDerivFeats+nLoGFeats+nLocStatsFeats\r\n+    sI = size(I)\r\n+    F = np.zeros((sI[0],sI[1],sI[2],nFeatures))\r\n+    F[:,:,:,:nDerivFeats] = imderivatives3(I,sigmaDeriv)\r\n+    for i in range(nLoGFeats):\r\n+        F[:,:,:,nDerivFeats+i] = imlogfilt(I,sigmaLoG[i])\r\n+    if locStatsRad > 1:\r\n+        F[:,:,:,nDerivFeats+nLoGFeats:] = localstats3(I,locStatsRad)\r\n+    return F\r\n+\r\n+def stack2list(S):\r\n+    L = []\r\n+    for i in range(size(S)[2]):\r\n+        L.append(S[:,:,i])\r\n+    return L\r\n+\r\n+def thrsegment(I,wsBlr,wsThr): # basic threshold segmentation\r\n+    G = imgaussfilt(I,sigma=(1-wsBlr)+wsBlr*5) # min 1, max 5\r\n+    M = G > wsThr\r\n+    return M\n\\ No newline at end of file\n"
b
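For reference, a minimal usage sketch of the helpers above (the file name and parameter values are hypothetical):

    from toolbox.imtools import *

    I = im2double(tifread('nuclei.tif'))   # read a TIFF and scale to [0,1] float64
    M = thrsegment(I, 0.5, 0.5)            # blur (sigma=3 for wsBlr=0.5), threshold at 0.5
    imshowlist([I, imgradmag(I, 2), M])    # input, gradient magnitude, mask side by side
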
diff -r 000000000000 -r 6bec4fef6b2e unmicst.xml
--- /dev/null Thu Jan 01 00:00:00 1970 +0000
+++ b/unmicst.xml Fri Mar 12 00:17:29 2021 +0000
[
@@ -0,0 +1,104 @@
+<tool id="unmicst" name="UnMicst" version="@VERSION@.1" profile="17.09">
+    <description>UNet Model for Identifying Cells and Segmenting Tissue</description>
+    <macros>
+        <import>macros.xml</import>
+    </macros>
+
+    <expand macro="requirements"/>
+    @VERSION_CMD@
+
+    <command detect_errors="exit_code"><![CDATA[
+    #set $typeCorrected = str($image.name).replace('.ome.tiff','').replace('.ome.tif','').replace('.tiff','').replace('.tif','')+'.ome.tif'
+
+    ln -s $image '$typeCorrected';
+
+    @CMD_BEGIN@ '$typeCorrected'
+    
+    #if $stackoutput
+    --stackOutput
+    #end if
+
+    --outputPath `pwd`
+    --channel $channel
+    --model $model
+    --mean $mean
+    --std $stdev
+    --scalingFactor $scalingfactor;
+
+    ## Rename outputs so each can be captured by its from_work_dir pattern
+    #if $stackoutput
+    mv *Probabilities*.tif Probabilities.tif;
+    mv *Preview*.tif Preview.tif
+    #else
+    mv *ContoursPM*.tif ContoursPM.tif;
+    mv *NucleiPM*.tif NucleiPM.tif
+    #end if
+    ]]></command>
+
+    <inputs>
+        <param name="image" type="data" format="tiff" label="Registered TIFF"/>
+        <param name="model" type="select" label="Model">
+            <option value="nucleiDAPI">nucleiDAPI</option>
+            <option value="mousenucleiDAPI">mousenucleiDAPI</option>
+            <option value="CytoplasmIncell">CytoplasmIncell</option>
+            <option value="CytoplasmZeissNikon">CytoplasmZeissNikon</option>
+        </param>
+        <param name="mean" type="float" value="-1" label="Mean (-1 for model default)"/>
+        <param name="stdev" type="float" value="-1" label="Standard Deviation (-1 for model default)"/>
+        <param name="channel" type="integer" value="0" label="Channel to perform inference on"/>
+        <param name="stackoutput" type="boolean"  label="Stack probability map outputs"/>
+        <param name="scalingfactor" type="float" value="1.0" label="Factor to scale by"/>
+    </inputs>
+
+    <outputs>
+        <data format="tiff" name="previews" from_work_dir="Preview.tif" label="${tool.name} on ${on_string}: Preview">
+            <filter>stackoutput</filter>
+        </data>
+        <data format="tiff" name="probabilities" from_work_dir="Probabilities.tif" label="${tool.name} on ${on_string}: Probabilities">
+            <filter>stackoutput</filter>
+        </data>
+        <data format="tiff" name="contours" from_work_dir="ContoursPM.tif" label="${tool.name} on ${on_string}: ContoursPM">
+            <filter>not stackoutput</filter>
+        </data>
+        <data format="tiff" name="nuclei" from_work_dir="NucleiPM.tif" label="${tool.name} on ${on_string}: NucleiPM">
+            <filter>not stackoutput</filter>
+        </data>
+    </outputs>
+    <help><![CDATA[
+UnMicst - UNet Model for Identifying Cells and Segmenting Tissue
+Image Preprocessing
+Images can be preprocessed by inferring nuclei contours with a pretrained UNet model. The model is trained on three classes: background, nuclei contours, and nuclei centers. The resulting probability maps can then be loaded into any modular segmentation pipeline, for example one built around (but not limited to) a marker-controlled watershed algorithm; see the sketch below.
+
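+For orientation, a minimal downstream sketch (not part of this tool) of seeding a marker-controlled watershed from the stacked Probabilities.tif output. The channel order, the [0,1] scaling, and the thresholds are assumptions; check your own output first:
+
+    import tifffile
+    from scipy import ndimage as ndi
+    from skimage.segmentation import watershed
+
+    P = tifffile.imread('Probabilities.tif')    # stacked probability maps
+    nuclei, contours = P[0], P[1]               # assumed order: nuclei centers, contours
+    markers, _ = ndi.label(nuclei > 0.5)        # seeds from confident center pixels
+    labels = watershed(contours, markers, mask=(nuclei > 0.1))  # grow seeds, stop at contours
+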
+The only input file is an .ome.tif or .tif, preferably flat-field corrected, with minimal saturated pixels, and in focus. The model is trained on images acquired at 20x with 2x2 binning, i.e. a pixel size of 0.65 microns/px. If your settings differ, you can upsample or downsample to some extent; a rescaling sketch follows.
+
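+A rough rescaling sketch (assuming scikit-image and tifffile; the file names and the 0.325 microns/px example are hypothetical):
+
+    import tifffile
+    from skimage.transform import rescale
+
+    I = tifffile.imread('input.tif')
+    pixelSize = 0.325                                       # your acquisition, microns/px
+    J = rescale(I, pixelSize / 0.65, preserve_range=True)   # resample to ~0.65 microns/px
+    tifffile.imsave('input_scaled.tif', J.astype(I.dtype))
+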
+Running as a Docker container
+
+The Docker image is distributed through Docker Hub and includes UnMicst with all of its dependencies. Parallel images with and without GPU support are available.
+
+docker pull labsyspharm/unmicst:latest
+docker pull labsyspharm/unmicst:latest-gpu
+Instantiate a container and mount the input directory containing your image.
+
+docker run -it --runtime=nvidia -v /path/to/data:/data labsyspharm/unmicst:latest-gpu bash
+When using the CPU-only image, --runtime=nvidia can be omitted:
+
+docker run -it -v /path/to/data:/data labsyspharm/unmicst:latest bash
+UnMicst resides in the /app directory inside the container:
+
+root@0ea0cdc46c8f:/# python app/UnMicst.py /data/input/my.tif --outputPath /data/results
+Running in a Conda environment
+
+If Docker is not available, you can run the tool locally in a Conda environment. Ensure conda is installed on your system, then clone the repo and use conda.yml to create the environment.
+
+git clone https://github.com/HMS-IDAC/UnMicst.git
+cd UnMicst
+conda env create -f conda.yml
+conda activate unmicst
+python UnMicst.py /path/to/input.tif --outputPath /path/to/results/directory
+References:
+S Saka, Y Wang, J Kishi, A Zhu, Y Zeng, W Xie, K Kirli, C Yapp, M Cicconet, BJ Beliveau, SW Lapan, S Yin, M Lin, E Boyden, PS Kaeser, G Pihan, GM Church, P Yin, Highly multiplexed in situ protein imaging with signal amplification by Immuno-SABER, Nature Biotechnology (accepted)
+
+OHSU Wrapper Repo: https://github.com/ohsu-comp-bio/UnMicst
+    ]]></help>
+    <expand macro="citations" />
+</tool>
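A note on the wrapper's #set $typeCorrected expression above: it strips any recognized TIFF extension from the input dataset's name and appends a single '.ome.tif' before symlinking, so UnMicst.py always sees a conventional extension. A Python equivalent of the Cheetah chain (illustrative only):

    def type_corrected(name):
        # apply the same ordered replacements as the Cheetah expression
        for ext in ('.ome.tiff', '.ome.tif', '.tiff', '.tif'):
            name = name.replace(ext, '')
        return name + '.ome.tif'

    print(type_corrected('sample.ome.tiff'))  # -> sample.ome.tif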