
A Fiji Scripting Tutorial

Most of what you want to do with an image already exists in Fiji.
The problem is: you don't yet know what it's called, or where to find it.

This tutorial will give you a general idea of how Fiji works: how its capabilities are organized, and how they can be composed into a program.

To learn about Fiji, we'll start the hard way: by programming.
Your first program will be very simple: obtain an image, and print out its title. We'll slowly iterate towards increasingly complex programs.

This tutorial will teach you both python and Fiji.


Index
  1. Getting started
  2. Your first Fiji script
  3. Inspecting properties and pixels of an image
  4. Running ImageJ / Fiji plugins on an ImagePlus
  5. Creating images and regions of interest (ROIs)
  6. Create and manipulate image stacks
  7. Interacting with humans: file and option dialogs, messages, progress bars.
  8. Turn your script into a plugin
  9. Lists, native arrays, and passing lists and arrays to Java classes
  10. Generic algorithms that work on images of any kind: using Imglib
  11. ImgLib2: writing generic, high-performance image processing programs

Tutorial created by Albert Cardona. Zurich, 2010-11-10.
(Last update: 2018-10-17)

All source code is under the Public Domain.

Remember: Fiji is just ImageJ (batteries included).


Thanks to:

  • 2018-07-22: Nikolas Schnellbächer for reporting an error in a script.
  • 2018-10-17: Tobias Pietzsch for identifying an error in the Memoize class (namely lack of synchronization), leading to both incorrectness of the cache and performance issues.

1. Getting started

Open the Script Editor by choosing "File - New - Script".

Alternatively, use the Command finder:

Push 'l' (letter L) and then start typing "scri".
You will see a list of Fiji commands, getting shorter the more letters you type. When the "Script Editor" command is visible, push the up arrow to go to it, and then push return to launch it.
(Or double-click on it.)

The Command Finder is useful for invoking any Fiji command.

2. Your first Fiji script

We'll need an image to work on: please open any image.
For example, go to "File - Open Samples - Boats (356K)".

This tutorial will use the programming language Python 2.5. We start by telling the Script Editor which language we want to write the script in: choose "Language - Python".

Grabbing an open image

Type the following small script into the Script Editor, and then push "Run", or choose "Run - Run", or control+R (command+R in MacOSX).
The program will execute and print, at the bottom, its result.
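
from ij import IJ

imp = IJ.getImage()
print imp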

Line by line:

  1. Import the namespace "IJ" from the package "ij".
    A namespace is a group of functions. And a package is a group of namespaces.
    Just imagine: if all functions were in the same namespace, it would be huge, and you wouldn't be able to have repeated names. Organizing functions in small namespaces is a great idea.
  2. (An empty line)
  3. Assign the result of invoking the function "getImage" from the namespace "IJ" to the local variable "imp".
    So now "imp" points to the last image you opened, or whose window was brought to focus by a mouse click. In our example, it's the "boats" image.
  4. Print the contents of the variable "imp".
    Notice how, at the bottom, the script first printed its own title "New_.py" and the starting time, and then printed "imp[boats.gif 720x576x1]"--which is just some data on the boats image: the title "boats.gif" and the dimensions of the image, in pixels.

So what is "imp"? "imp" is a commonly used name to refer to an instance of an ImagePlus. The ImagePlus is one of ImageJ's abstractions to represent an image. Every image that you open in ImageJ is an instance of ImagePlus.


Saving an image with a file dialog

The first action we'll do on our image is to save it.

To do that, you could call "File - Save" from the menus.
In our program, we import the namespace "FileSaver" and then create a new instance of FileSaver with our image "imp" as the only parameter. Then we invoke the function "save" on it, which will open a file dialog. After choosing a name and a folder, the image will be saved in TIFF format.
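
A minimal sketch of that (the save call opens the file dialog):

from ij import IJ
from ij.io import FileSaver

imp = IJ.getImage()
fs = FileSaver(imp)
# Opens a file dialog; after choosing a name and a folder,
# the image is saved in TIFF format:
fs.save()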

Saving an image directly to a file

The point of running a script is to avoid human interaction.
We want to save an image automatically: we tell the FileSaver instance where it should save our image, and in what format (like TIFF with saveAsTiff). The FileSaver offers more methods, such as saveAsPng, saveAsJpeg, etc.

Notice that the '#' sign defines comments. In python, any text after a '#' is not executed.

from ij import IJ
from ij.io import FileSaver

imp = IJ.getImage()
fs = FileSaver(imp)

# A known folder to store the image at:
folder = "/home/albert/Desktop/t2/fiji-tutorial"

filepath = folder + "/" + "boats.tif"
fs.saveAsTiff(filepath)
          

Saving an image ... checking first if it's a good idea.

The FileSaver will overwrite whatever file exists at the file path that you give it. That is not always a good idea!

Here, we write the same code but checking first:

  1. If the folder exists at all, and whether that path really is a folder (a directory).
  2. If a file with the same name as the file we are about to write is already there--to avoid overwriting, if desired.
  3. If the FileSaver.saveAsTiff call really worked, or failed.
    Notice in the documentation for FileSaver.saveAsTiff that this method returns a boolean variable: it will be true if all went well, and false if the image could not be saved in the file.

And finally, if all expected preconditions hold, then we place the call to saveAsTiff.


This script introduced three new programming items:

  • if, else, and elif ("elif" being a combination of "else" and "if")
  • The concept of a code block, which, in python, starts with a ':' and then the code lines are indented.
    Notice how the code below the if or the else is indented to the right. By how much, it doesn't matter, as long as it's consistent.
  • The os.path namespace, which contains utility functions for inspecting files and folders (also called "directories").
from ij import IJ
from ij.io import FileSaver
from os import path

imp = IJ.getImage()
fs = FileSaver(imp)

# A known folder to store the image at:
folder = "/home/albert/Desktop/t2/fiji-tutorial"

# Test if the folder exists before attempting to save the image:
if path.exists(folder) and path.isdir(folder):
  print "folder exists:", folder
  filepath = path.join(folder, "boats.tif") # Operating System-specific
  if path.exists(filepath):
    print "File exists! Not saving the image, would overwrite a file!"
  elif fs.saveAsTiff(filepath):
    print "File saved successfully at ", filepath
else:
  print "Folder does not exist or it's not a folder!"
          

3. Inspecting properties and pixels of an image

An image in ImageJ or Fiji is, internally, an instance of ImagePlus.
The ImagePlus contains data such as the title and dimensions of the image (width, height, number of slices, number of time frames, number of channels), as well as the pixels, which are wrapped in an ImageProcessor instance.
Each of these pieces of data is stored internally in a field of the ImagePlus class. A field is nothing else than a variable which, for a given image instance, points to a specific value.
For example, the "title" field points to "boats.gif" for the instance of ImagePlus that contains the sample boats image that we opened earlier.

In python, accessing fields of an instance is straightforward: just add a dot '.' between the variable "imp" and the field "title" that you want to access.

In the Fiji API documentation, if you don't see a specific field like width in a particular class, but there is a getWidth method, then from python they are one and the same.
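
For example, these two lines are equivalent in python (assuming "imp" points to an image, as above):

print imp.width
print imp.getWidth()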

The image type

Notice how we created a dictionary to hold key/value pairs: of the image type versus a text representation of that type. This dictionary (also called map or table in other programming languages) then lets us ask it for a specific image type (such as ImagePlus.GRAY8), and we get back the corresponding text, such as "8-bit".

You may have realized by now that ImagePlus.getType() (or, what is the same in python, "imp.type") returns one of the controlled image type values that an ImagePlus instance can take. These values are GRAY8, GRAY16, GRAY32, COLOR_RGB, and COLOR_256.

What is the image type? It's the kind of pixel data that the image holds. It could be numbers from 0 to 255 (what fits in an 8-bit range), or from 0 to 65535 (values that fit in a 16-bit range), or three channels of 8-bit values (an RGB image), or floating-point values (32-bit).

The COLOR_256 indicates an 8-bit image that has an associated look-up table: each pixel value does not represent an intensity, but rather it's associated with a color. The table of values versus colors is limited to 256, and hence these images may not look very good. For image processing, you should avoid COLOR_256 images (also known as "8-bit color" images). These images are meant for display on the web in ".gif" format, but have been superseded by JPEG or PNG.

The GRAY8 ("8-bit"), GRAY16 ("16-bit") and GRAY32 ("32-bit") images may also be associated with a look-up table. For example, with a "green" look-up table on an 8-bit image, values of zero are black, values of 128 are darkish green, and the maximum value of 255 is fully pure green.

from ij import IJ, ImagePlus

# Grab the last activated image
imp = IJ.getImage()

# Print image details
print "title:", imp.title
print "width:", imp.width
print "height:", imp.height
print "number of pixels:", imp.width * imp.height
print "number of slices:", imp.getNSlices()
print "number of channels:", imp.getNChannels()
print "number of time frames:", imp.getNFrames()

types = {ImagePlus.COLOR_RGB : "RGB",
         ImagePlus.GRAY8 : "8-bit",
         ImagePlus.GRAY16 : "16-bit",
         ImagePlus.GRAY32 : "32-bit",
         ImagePlus.COLOR_256 : "8-bit color"}

print "image type:", types[imp.type]
          
Started New_.py at Wed Nov 10 14:57:46 CET 2010
title: boats.gif
width: 720
height: 576
number of pixels: 414720
number of slices: 1
number of channels: 1
number of time frames: 1
image type: 8-bit
          

Obtaining pixel statistics of an image (and your first function)

ImageJ / Fiji offers an ImageStatistics class that does all the work for us.
The ImageStatistics class offers a convenient getStatistics static method. (A static method is a function, in this case of the ImageStatistics namespace, that is unrelated to a class instance. Java confuses namespaces with class names).

Notice how we import the ImageStatistics namespace as "IS", i.e. we alias it--it's too long to type!

The options variable is the bitwise-or combination of three different static fields of the ImageStatistics class. The final options value is an integer that has specific bits set, indicating mean, median, and min and max values.
(Remember that in a computer, an integer number is a set of bits, such as 00001001. In this example, we'd say that the first and the fourth bits are set. Interpreting this sequence of 0 and 1 in binary gives the integer number 9 in decimal.)

from ij import IJ
from ij.process import ImageStatistics as IS

# Grab the active image
imp = IJ.getImage()

# Get its ImageProcessor
ip = imp.getProcessor()

options = IS.MEAN | IS.MEDIAN | IS.MIN_MAX
stats = IS.getStatistics(ip, options, imp.getCalibration())

# print statistics on the image
print "Image statistics for", imp.title
print "Mean:", stats.mean
print "Median:", stats.median
print "Min and max:", stats.min, "-", stats.max
          
Started New_.py at Wed Nov 10 19:54:37 CET 2010
Image statistics for boats.gif
Mean: 120.026837384
Median: 138.0
Min and max: 3.0 - 220.0
          

Now, how about obtaining statistics for a lot of images?
From a list of images in a folder, we would have to:

  1. Load each image
  2. Get statistics for it

So we define a folder that contains our images, and we loop over the list of filenames that it has. For every filename that ends with ".tif", we load it as an ImagePlus, and hand it to the getStatistics function, which returns us the mean, median, and min and max values.

(Note: if the images are stacks, use StackStatistics instead.)
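
For a stack, a minimal sketch would be (assuming the active image is a stack):

from ij import IJ
from ij.process import StackStatistics

imp = IJ.getImage()
# StackStatistics considers the pixels of all slices in the stack:
stats = StackStatistics(imp)
print "Mean:", stats.mean
print "Min and max:", stats.min, "-", stats.max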

This script introduces a few new concepts:

  • Defining a function: it's done with def, followed by the desired function name, and any number of comma-separated arguments between parentheses. The function is a code block--remember the code block is specified with indentation (any amount of indentation, as long as it's consistent).
  • The triple quote """ : defines a string of text over multiple lines. It's also the convention for adding documentation to a function in python.
  • The global keyword: lets you refer, from within a function code block, to a variable defined outside of the function code block. (Strictly, python only requires it when assigning to such a variable; reading one works without it.)
  • The for loop: to iterate every element of a list. In this case, every filename in the list of filenames of a folder, which we obtain from the os.listdir function.
    Notice the continue keyword, used to jump to the next loop iteration when desired. In the example, when the image couldn't be loaded.

See also the python documentation page on control flow, with explanations on the keywords if, else and elif, the for loop keyword and the break and continue keywords, defining a function with def, functions with variable number of arguments, anonymous functions (with the keyword lambda), and guidelines on coding style.


 
from ij import IJ
from ij.process import ImageStatistics as IS
import os

options = IS.MEAN | IS.MEDIAN | IS.MIN_MAX

def getStatistics(imp):
  """ Return statistics for the given ImagePlus """
  global options
  ip = imp.getProcessor()
  stats = IS.getStatistics(ip, options, imp.getCalibration())
  return stats.mean, stats.median, stats.min, stats.max


# Folder to read all images from:
folder = "/home/albert/Desktop/t2/fiji-tutorial"

# Get statistics for each image in the folder
# whose file extension is ".tif":
for filename in os.listdir(folder):
  if filename.endswith(".tif"):
    print "Processing", filename
    imp = IJ.openImage(os.path.join(folder, filename))
    if imp is None:
      print "Could not open image from file:", filename
      continue
    mean, median, min, max = getStatistics(imp)
    print "Image statistics for", imp.title
    print "Mean:", mean
    print "Median:", median
    print "Min and max:", min, "-", max
  else:
    print "Ignoring", filename
          

Iterating pixels

Iterating pixels is considered a low-level operation that you would seldom, if ever, have to do. But just so you can do it when you need to, here are various ways to iterate all pixels in an image.

The three iteration methods:

  1. The C-style method, where we iterate over a list of numbers from zero to length of the pixel array minus one, and obtain each pixel by doing an array lookup.
    The list of numbers is obtained by calling the built-in function xrange, which delivers a lazy sequence of 0, 1, 2, ... up to the length of the pixel array minus one.
    The length of the pixels array is obtained by calling the built-in function len.
  2. The iterator method, where the pixels array is iterated as if it was a list, and the pix variable takes the value of each pixel.
  3. The functional method, where instead of looping, we reduce the array to a single value (the minimum) by applying the min function to every adjacent pair of pixel values in the pixels array. (Realize that any function that takes two arguments, like min, could have been used with reduce.)

The last should be your preferred method: it offers the least opportunity for introducing an error, and it is very concise.


Regarding the example given, keep in mind:

  • That the pixels variable points to an array of pixels, which can be any of byte[], short[], float[], or int[] (for RGB images, with the 3 color channels bit-packed).
  • That the example method for finding out the minimum value would NOT work for RGB images, because they have the three 8-bit color channels packed into a single integer value.
    For an RGB image, you'd want to ask which pixel is the least bright. It's easy to do so by calling getBrightness() on the ImageProcessor of an RGB image (which is a ColorProcessor). Or compute the minimum for one of its color channels, which you get with the method ip.toFloat(0, None) to get the red channel (1 is green, and 2 is blue).
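
For an RGB image, a quick sketch of both approaches (assuming the active image is RGB, so that its ImageProcessor is a ColorProcessor):

from ij import IJ

imp = IJ.getImage() # assumed to be an RGB image
cp = imp.getProcessor() # a ColorProcessor
# Least bright pixel, via the brightness FloatProcessor:
min_brightness = reduce(min, cp.getBrightness().getPixels())
# Minimum of the red channel alone (0=red, 1=green, 2=blue):
min_red = reduce(min, cp.toFloat(0, None).getPixels())
print min_brightness, min_red

And here is the full example, with the three iteration methods: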

from ij import IJ
from java.lang import Float

# Grab the active image
imp = IJ.getImage()

# Grab the image processor converted to float values
# to avoid problems with bytes
ip = imp.getProcessor().convertToFloat() # as a copy
# The pixels points to an array of floats
pixels = ip.getPixels()

print "Image is", imp.title, "of type", imp.type

# Obtain the minimum pixel value

# Method 1: the for loop, C style
minimum = Float.MAX_VALUE
for i in xrange(len(pixels)):
  if pixels[i] < minimum:
    minimum = pixels[i]

print "1. Minimum is:", minimum

# Method 2: iterate pixels as a list
minimum = Float.MAX_VALUE
for pix in pixels:
  if pix < minimum:
    minimum = pix

print "2. Minimum is:", minimum

# Method 3: apply the built-in min function
# to the first pair of pixels,
# and then to the result of that and the next pixel, etc.
minimum = reduce(min, pixels)

print "3. Minimum is:", minimum
         
Started New_.py at Wed Nov 10 20:49:31 CET 2010
Image is boats.gif of type 0
1. Minimum is: 3.0
2. Minimum is: 3.0
3. Minimum is: 3.0
          

On iterating or looping lists or collections of elements

Ultimately all operations that involve iterating a list or a collection of elements can be done with the for looping construct. But in almost all occasions the for loop is not the best choice, neither in performance nor in clarity or conciseness. The latter is important for minimizing the number of errors that we may introduce without noticing.

There are three kinds of operations to perform on lists or collections: map, reduce, and filter. We show them here along with the equivalent for loop.


 

A map operation takes a list of length N and returns another list also of length N, with the results of applying a function (that takes a single argument) to every element of the original list.

For example, suppose you want to get a list of all images open in Fiji.

With the for loop, we have to first create a list explicitly and then append every image to it, one by one.

With list comprehension, the list is created directly and the logic of what goes in it is placed inside the square brackets--but it is still a loop. That is, it is still a sequential, unparallelizable operation.

With the map, we obtain the list automatically by applying the function WM.getImage to every ID in the list of IDs.

While this is a trivial example, suppose you were executing a complex operation on every element of a list or an array. If you were to redefine the map function to work in parallel, suddenly any map operation in your program will run faster, without you having to modify a single line of tested code!


 
from ij import WindowManager as WM
            
# Method 1: with a 'for' loop
images = []
for id in WM.getIDList():
  images.append(WM.getImage(id))

# Method 2: with list comprehension
images = [WM.getImage(id) for id in WM.getIDList()]

# Method 3: with a 'map' operation
images = map(WM.getImage, WM.getIDList())

          

A filter operation takes a list of length N and returns a shorter list, with anywhere from 0 to N elements. Only those elements of the original list that pass a test are placed in the new, returned list.

For example, suppose you want to find the subset of opened images in Fiji whose titles match a specific criterion.

With the for loop, we have to create a new list first, and then append elements to that list as we iterate the list of images.

The second variant of the for loop uses list comprehension. The code is reduced to a single short line, which is readable, but is still a python loop (with potentially lower performance).

With the filter operation, we get the (potentially) shorter list automatically. The code is a single short line, instead of 4 lines!


 
from ij import WindowManager as WM

# A list of all open images
imps = map(WM.getImage, WM.getIDList())

def match(imp):
  """ Returns true if the image title contains the word 'cochlea'"""
  return imp.title.find("cochlea") > -1

# Method 1: with a 'for' loop
# (We have to explicitly create a new list)
matching = []
for imp in imps:
  if match(imp):
    matching.append(imp)

# Method 2: with list comprehension
matching = [imp for imp in imps if match(imp)]

# Method 3: with a 'filter' operation
matching = filter(match, imps)
          

A reduce operation takes a list of length N and returns a single value. This value is composed from applying a function that takes two arguments to the first two elements of the list, then to the result of that and the next element, etc. Optionally an initial value may be provided, so that the cycle starts with that value and the first element of the list.

For example, suppose you want to find the largest image, by area, from the list of all opened images in Fiji.

With the for loop, we have to keep track of which was the largest area in a pair of temporary variables, and even check whether the stored largest image is None! We could have initialized the largestArea variable to the area of the first element of the list, and then started looping at the second element by slicing the first element off the list (with "for imp in imps[1:]:"), but then we would have had to check that the list contains at least one element.

With the reduce operation, we don't need any temporary variables. All we need is to define a helper function (which could have been an anonymous lambda function, but we defined it explicitly for extra clarity and reusability).


 
from ij import IJ

from ij import WindowManager as WM

# A list of all open images
imps = map(WM.getImage, WM.getIDList())

def area(imp):
  return imp.width * imp.height

# Method 1: with a 'for' loop
largest = None
largestArea = 0
for imp in imps:
  if largest is None:
    largest = imp
  else:
    a = area(imp)
    if a > largestArea:
      largest = imp
      largestArea = a

# Method 2: with a 'reduce' operation
def largestImage(imp1, imp2):
  return imp1 if area(imp1) > area(imp2) else imp2

largest = reduce(largestImage, imps)
          

Subtract the min value from every pixel

First we obtain the minimum pixel value, using the reduce method explained just above.

Then we subtract this minimum value from every pixel. We have two ways to do it:

  1. In place, by iterating the pixel array C-style and setting a new value to each pixel: that of itself minus the minimum value.
  2. On a new list: we declare an anonymous function (with lambda instead of def) that takes one argument x (the pixel value), subtracts the minimum from it, and returns the result. We map (in other words, we apply) this function to every pixel in the pixels array, returning a new list of pixels with the results.

With the first method, since the pixels array was already a copy (notice we called convertToFloat() on the ImageProcessor), we can use it to create a new ImagePlus without any unintended consequences.

With the second method, the new list of pixels must be given to a new FloatProcessor instance, and with it, a new ImagePlus is created, of the same dimensions as the original.

from ij import IJ, ImagePlus
from ij.process import FloatProcessor

imp = IJ.getImage()
ip = imp.getProcessor().convertToFloat() # as a copy
pixels = ip.getPixels()

# Apply the built-in min function
# to the first pair of pixels,
# and then to the result of that and the next pixel, etc.
minimum = reduce(min, pixels)

# Method 1: subtract the minimum from every pixel,
# in place, modifying the pixels array
for i in xrange(len(pixels)):
  pixels[i] -= minimum
# ... and create a new image:
imp2 = ImagePlus(imp.title, ip)

# Method 2: subtract the minimum from every pixel
# and store the result in a new array
pixels3 = map(lambda x: x - minimum, pixels)
# ... and create a new image:
ip3 = FloatProcessor(ip.width, ip.height, pixels3, None)
imp3 = ImagePlus(imp.title, ip3)

# Show the images in an ImageWindow:
imp2.show()
imp3.show()
          

Reduce a pixel array to a single value: count pixels above a threshold

Suppose you want to analyze a subset of pixels. For example, find out how many pixels have a value over a certain threshold.

The reduce built-in function is made just for that. It takes a function with two arguments (the running count and the next pixel); the list or array of pixels; and an initial value (in this case, zero) for the first argument (the "count"), and will return a single value (the total count).

In this example, we first compute the mean pixel intensity, and then count the pixels whose value is above the mean. Notice that we compute the mean by using the convenient built-in function sum, which is able to add all numbers contained in any kind of collection (be it a list, a native array, a set of unique elements, or the keys of a dictionary). We could imitate the built-in sum function with reduce(lambda s, x: s + x, pixels), but paying a price in performance.

Notice we are using anonymous functions again (that is, functions that lack a name), declared in place with the lambda keyword. A function defined with def would do just fine as well.


from ij import IJ

# Grab currently active image
imp = IJ.getImage()
ip = imp.getProcessor().convertToFloat()
pixels = ip.getPixels()

# Compute the mean value (sum of all divided by number of pixels)
mean = sum(pixels) / len(pixels)

# Count the number of pixels above the mean
n_pix_above = reduce(lambda count, a: count + 1 if a > mean else count, pixels, 0)

print "Mean value", mean
print "% pixels above mean:", n_pix_above / float(len(pixels)) * 100
          
Started New_.py at Thu Nov 11 01:50:49 CET 2010
Mean value 120.233899981
% pixels above mean: 66.4093846451
          

Another useful application of filtering pixels by their value: finding the coordinates of all pixels above a certain value (in this case, the mean), and then calculating their center of mass.

The filter built-in function is made just for that. The indices of the pixels whose value is above the mean are collected in a list named "above", which is created by filtering the indices of all pixels (provided by the built-in function xrange). The filtering is done by an anonymous function declared with lambda, with a single argument: the index i of the pixel.

Here, note that in ImageJ, the pixels of an image are stored in a linear array. The length of the array is width * height, and the pixels are stored as concatenated rows. Therefore, the modulus of dividing the index of a pixel by the width of the image provides the X coordinate of that pixel. Similarly, the integer division of the index by the width provides the Y coordinate.

To compute the center of mass, there are two equivalent methods. The first is the C-style method with a for loop: every variable is declared prior to the loop, then modified at each loop iteration; after the loop, the sum of coordinates is divided by the number of coordinates (the length of the "above" list). For this example, this is the method with the best performance.

The second method computes the X and Y coordinates of the center of mass with a single line of code for each. Notice that both lines are nearly identical, differing only in the body of the function mapped to the "above" list containing the indices of the pixels whose value is above the mean. While, in this case, the method is less performant due to repeated iteration of the list "above", the code is shorter, easier to read, and with far fewer opportunities for introducing errors. If the actual computation was far more expensive than the simple calculation of the coordinates of a pixel given its index in the array of pixels, this method would pay off for its clarity.


from ij import IJ

# Grab the currently active image
imp = IJ.getImage()
ip = imp.getProcessor().convertToFloat()
pixels = ip.getPixels()

# Compute the mean value
mean = sum(pixels) / len(pixels)

# Obtain the list of indices of pixels whose value is above the mean
above = filter(lambda i: pixels[i] > mean, xrange(len(pixels)))

print "Number of pixels above mean value:", len(above)

# Measure the center of mass of all pixels above the mean

# The width of the image, necessary for computing the x,y coordinate of each pixel
width = imp.width

# Method 1: with a for loop
xc = 0
yc = 0
for i in above:
  xc += i % width # the X coordinate of pixel at index i
  yc += i / width # the Y coordinate of pixel at index i
xc = xc / len(above)
yc = yc / len(above)
print xc, yc

# Method 2: with sum and map
xc = sum(map(lambda i: i % width, above)) / len(above)
yc = sum(map(lambda i: i / width, above)) / len(above)
print xc, yc
          

The third method pushes the functional approach too far. While written in a single line, that doesn't mean it is clearer to read: its intent is obfuscated by starting from the end: the list comprehension starts off by stating that each element (there are only two) of the list resulting from the reduce has to be divided by the length of the list of pixels "above", and only then do we learn that the collection being iterated is the array of two coordinates, created at every iteration over the list "above", containing the sum of all coordinates for X and for Y. Notice that the reduce is invoked with three arguments, the third one being the list [0, 0] containing the initialization values of the sums. Confusing! Avoid writing code like this. Notice as well that, by creating a new list at every iteration step, this method is the least performant of all.

The fourth method is a clean up of the third method. Notice that we import the partial function from the functools package. With it, we are able to create a version of the "accum" helper function that has a frozen "width" argument (also known as currying a function). In this way, the "accum" function is seen by the reduce as a two-argument function (which is what reduce needs here). While we regain the performance of the for loop, notice that now the code is just as long as with the for loop. The purpose of writing this example is to illustrate how one can write python code that doesn't use temporary variables, these generally being potential points of error in a computer program. It is always better to write lots of small functions that are easy to read, easy to test, free of side effects, documented, and that then can be used to assemble our program.

# (Continues from above...)

# Method 3: iterating the list "above" just once
xc, yc = [d / len(above) for d in
            reduce(lambda c, i: [c[0] + i % width, c[1] + i / width], above, [0, 0])]
print xc, yc

# Method 4: iterating the list "above" just once, more clearly and performant
from functools import partial

def accum(width, c, i):
  """ Accumulate the sum of the X,Y coordinates of index i in the list c."""
  c[0] += i % width
  c[1] += i / width
  return c

xc, yc = [d / len(above) for d in reduce(partial(accum, width), above, [0, 0])]
print xc, yc
          


4. Running ImageJ / Fiji plugins on an ImagePlus

Here is an example plugin run programmatically: a median filter applied to the currently active image.

The median filter, along with the mean, minimum, maximum, variance, remove outliers and despeckle menu commands, is implemented in the RankFilters class.
A new instance of RankFilters is created (notice the "()" after "RankFilters"), and we call its method rank with the ImageProcessor, the radius, and the desired filter flag as arguments.
With the result, we create a new ImagePlus and we show it.

from ij import IJ, ImagePlus
from ij.plugin.filter import RankFilters

# Grab the active image
imp = IJ.getImage()
ip = imp.getProcessor().convertToFloat() # as a copy

# Remove noise by running a median filter
# with a radius of 2
radius = 2
RankFilters().rank(ip, radius, RankFilters.MEDIAN)

imp2 = ImagePlus(imp.title + " median filtered", ip)
imp2.show()
          

Finding the class that implements a specific ImageJ command

When starting ImageJ/Fiji programming, the problem is not so much how to run a plugin on an image, as it is to find out which class implements which plugin. Here is a simple method to find out, via the Command Finder:

  1. Open the Command Finder by pushing 'l' or going to "Plugins - Utilities - Find commands...".
  2. Type "FFT". A bunch of FFT-related commands are listed.
  3. Click on the "Show full information" checkbox at the bottom.
  4. Read, next to each listed command, the plugin class that implements it.

Notice that the plugin class comes with some text. For example:

FFT (in Process > FFT) [ij.plugin.FFT("fft")]
Inverse FFT (in Process > FFT) [ij.plugin.FFT("inverse")]
          

The above two commands are implemented by a single plugin (ij.plugin.FFT) whose run method accepts, like that of every PlugIn, a text string specifying the action: the fft, or the inverse.
The first part of the information shows where in the menus you will find the command. In this case, under menu "Process", submenu "FFT".
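
Knowing the class, you could also invoke the plugin directly (a sketch; like the menu command, it acts on the currently active image):

from ij.plugin import FFT

# Call the plugin's run method with the action string it expects:
FFT().run("fft")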


Finding the java documentation for any class

Once you have found the PlugIn class that implements a specific command, you may want to use that class directly. The information is either in the online java documentation or in the source code. How to find these?

  • The Fiji java documentation can be opened directly from the Script Editor for a specific class. Type in the name of the class, select it, and then execute the menu "Tools - Open help for class (with frames)". A new web browser window will open, with the web page corresponding to the class in question. When there is more than one possible class (because they share the same name but live in different packages), then a dialog will prompt for choosing the correct one.
  • The source code for a plugin included in Fiji is in the Fiji git repository. The fastest way to find the corresponding java class is to Google it. Of course another way to search is directly in the Fiji source code repository, which has a search box to look up the source code of a plugin by its name (some plugins are maintained in their own repositories). The ImageJ source code is perhaps the easiest to browse, but contains only the core ImageJ library source code.

Figuring out what parameters a plugin requires

To do that, we'll use the Macro Recorder. Make sure that an image is open. Then:

  1. Open the "Plugins - Macros - Record..."
  2. Run the command of your choice, such as "Process - Filters - Median..."
    A dialog opens. Set the desired radius, and push "OK".
  3. Look into the Recorder window:
    run("Median...", "radius=2");
                

That is valid macro code, which ImageJ can execute. The first part is the command ("Median..."); the second part lists the parameters that the command uses, in this case just one ("radius=2"). If there were more parameters, they would be separated by spaces.


Running a command on an image

We can use these macro recordings to create jython code that executes a given plugin on a given image. Here is an example.

Very simple! The IJ namespace has a function, run, that accepts an ImagePlus as first argument, then the name of the command to run, and then the macro-ready list of arguments that the command requires.
When executing this script, no dialogs are shown!
Behind the curtains, ImageJ is placing the right parameters in the right places, making it all just work.


from ij import IJ

# Grab the active image
imp = IJ.getImage()

# Run the median filter on it, with a radius of 2
IJ.run(imp, "Median...", "radius=2")
          

5. Creating images and regions of interest (ROIs)

Create an image from scratch

An ImageJ/Fiji image is composed of at least three objects:

  • The pixels array: an array of primitive values.
    (Where primitive is one of byte, short, int, or float.)
  • The ImageProcessor subclass instance that holds the pixels array.
  • The ImagePlus instance that holds the ImageProcessor instance.

In the example, we create an empty array of floats (see creating native arrays), and fill it in with random float values. Then we give it to a FloatProcessor instance, which is then wrapped by an ImagePlus instance. Voilà!

from ij import ImagePlus
from ij.process import FloatProcessor
from array import zeros
from random import random

width = 1024
height = 1024
pixels = zeros('f', width * height)

for i in xrange(len(pixels)):
  pixels[i] = random()

fp = FloatProcessor(width, height, pixels, None)
imp = ImagePlus("White noise", fp)

imp.show()
          

Fill a region of interest (ROI) with a given value

To fill a region of interest in an image, we could iterate the pixels, find the pixels that lay within the bounds of interest, and set their values to a specified value. But that is tedious and error-prone. Much more effective is to create an instance of a Roi class or one of its subclasses (PolygonRoi, OvalRoi, ShapeRoi, etc.) and tell the ImageProcessor to fill that region.

In this example, we create an image filled with white noise like before, and then define a rectangular region of interest in it, which is filled with a value of 2.0.

The white noise is drawn from a random distribution whose values range from 0 to 1. When filling an area of the FloatProcessor with a value of 2.0, that is the new maximum value. The area with 2.0 pixel values will look white (look at the status bar):


from ij import IJ, ImagePlus
from ij.process import FloatProcessor
from array import zeros
from random import random
from ij.gui import Roi, PolygonRoi

# Create a new ImagePlus filled with noise
width = 1024
height = 1024
pixels = zeros('f', width * height)

for i in xrange(len(pixels)):
  pixels[i] = random()

fp = FloatProcessor(width, height, pixels, None)
imp = ImagePlus("Random", fp)

# Fill a rectangular region of interest
# with a value of 2:
roi = Roi(400, 200, 400, 300)
fp.setRoi(roi)
fp.setValue(2.0)
fp.fill()

# Fill a polygonal region of interest
# with a value of -3
xs = [234, 174, 162, 102, 120, 123, 153, 177, 171,
      60, 0, 18, 63, 132, 84, 129, 69, 174, 150,
      183, 207, 198, 303, 231, 258, 234, 276, 327,
      378, 312, 228, 225, 246, 282, 261, 252]
ys = [48, 0, 60, 18, 78, 156, 201, 213, 270, 279,
      336, 405, 345, 348, 483, 615, 654, 639, 495,
      444, 480, 648, 651, 609, 456, 327, 330, 432,
      408, 273, 273, 204, 189, 126, 57, 6]
proi = PolygonRoi(xs, ys, len(xs), Roi.POLYGON)
fp.setRoi(proi)
fp.setValue(-3)
fp.fill(proi.getMask())  # Attention!

imp.show()
          

6. Create and manipulate image stacks and hyperstacks

Load a color image stack and extract its green channel

First we load the stack from the web--it's the "Fly Brain" sample image.

Then we iterate its slices. Each slice is a ColorProcessor, which wraps an integer array. Each integer is represented by 4 bytes, and the lower 3 bytes represent, respectively, the intensity values for red, green and blue. The uppermost byte is usually reserved for alpha (the inverse of transparency), but ImageJ ignores it.

Dealing with low-level details like that is not necessary. The ColorProcessor has a method, toFloat, that can give us a FloatProcessor for a specific color channel. Red is 0, green is 1, and blue is 2.

Representing the color channel in floats is most convenient for further processing of the pixel values--won't overflow like a byte would.

In this example, all we do is collect each slice into a list of slices we named greens. Then we add all the slices to a new ImageStack, and pass it to a new ImagePlus. Then we invoke the "Green" command on that ImagePlus instance, so that a linear green look-up table is assigned to it. And we show it.

from ij import IJ, ImagePlus, ImageStack

# Load a stack of images: a fly brain, in RGB
imp = IJ.openImage("https://imagej.nih.gov/ij/images/flybrain.zip")
stack = imp.getImageStack()

print "number of slices:", imp.getNSlices()

# A list of green slices
greens = []

# Iterate each slice in the stack
for i in xrange(1, imp.getNSlices()+1):
  # Get the ColorProcessor slice at index i
  cp = stack.getProcessor(i)
  # Get its green channel as a FloatProcessor
  fp = cp.toFloat(1, None)
  # ... and store it in a list
  greens.append(fp)

# Create a new stack with only the green channel
stack2 = ImageStack(imp.width, imp.height)
for fp in greens:
  stack2.addSlice(None, fp)

# Create a new image with the stack of green channel slices
imp2 = ImagePlus("Green channel", stack2)
# Set a green look-up table:
IJ.run(imp2, "Green", "")
imp2.show()
          

Convert an RGB stack to a 2-channel, 32-bit hyperstack

We load an RGB stack--the "Fly brain" sample image, as before.

Suppose we want to analyze each color channel independently: an RGB image doesn't really let us do that without lots of low-level work to disentangle the different color values from each pixel value. So we convert the RGB stack to a hyperstack with two separate channels, where each channel slice is a 32-bit FloatProcessor.

The first step is to create a new ImageStack instance, to hold all the slices that we'll need: one per color channel, times the number of slices.
We ignore the blue channel (which is empty in the "Fly brain" image), so we end up creating twice as many slices as we had in the RGB stack.

Realize that we could have 7 channels if we wanted, or 20, for each slice. As many as you want.

The final step is to open the hyperstack. For that:

  1. We assign the new stack2 to a new ImagePlus, imp2.
  2. We set the same calibration (microns per pixel) that the original image has.
  3. We tell it how to interpret its image stack: as having two channels, the same number of Z slices as before, and just 1 time frame.
  4. We pass the imp2 to a new CompositeImage, comp, indicating how we want it displayed: assign a color to each channel. (With CompositeImage.COMPOSITE, all channels would be merged for display.)
  5. We show the comp, which will open a stack window with two sliders: one for the channels, and one for the Z slices.

Open the "Image - Color - Channels Tool" and you'll see that the Composite image is set to show only the red channel--try checking the second channel as well.


For a real-world example of a python script that uses hyperstacks, see the Correct_3D_drift.py script (available as the command "Plugins - Registration - Correct 3D drift").
The script takes an opened, virtual hyperstack as input, and registers in 3D every time frame to the previous one, using phase correlation, correcting any translations on the X,Y,Z axis. The script is useful for correcting sample drift under the microscope in long 4D time series.


from ij import IJ, ImagePlus, ImageStack, CompositeImage

# Load a stack of images: a fly brain, in RGB
imp = IJ.openImage("https://imagej.nih.gov/ij/images/flybrain.zip")
stack = imp.getImageStack()

# A new stack to hold the data of the hyperstack
stack2 = ImageStack(imp.width, imp.height)

# Convert each color slice in the stack
# to two 32-bit FloatProcessor slices
for i in xrange(1, imp.getNSlices()+1):
  # Get the ColorProcessor slice at index i
  cp = stack.getProcessor(i)
  # Extract the red and green channels as FloatProcessor
  red = cp.toFloat(0, None)
  green = cp.toFloat(1, None)
  # Add both to the new stack
  stack2.addSlice(None, red)
  stack2.addSlice(None, green)

# Create a new ImagePlus with the new stack
imp2 = ImagePlus("32-bit 2-channel composite", stack2)
imp2.setCalibration(imp.getCalibration().copy())

# Tell the ImagePlus to represent the slices in its stack
# in hyperstack form, and open it as a CompositeImage:
nChannels = 2             # two color channels
nSlices = stack.getSize() # the number of slices of the original stack
nFrames = 1               # only one time point 
imp2.setDimensions(nChannels, nSlices, nFrames)
comp = CompositeImage(imp2, CompositeImage.COLOR)
comp.show()
          

7. Interacting with humans: file and option dialogs, messages, progress bars.

Ask the user for a directory

See DirectoryChooser.

from ij.io import DirectoryChooser

dc = DirectoryChooser("Choose a folder")
folder = dc.getDirectory()

if folder is None:
  print "User canceled the dialog!"
else:
  print "Selected folder:", folder
          

Ask the user for a file

See OpenDialog and SaveDialog.

from ij.io import OpenDialog

od = OpenDialog("Choose a file", None)
filename = od.getFileName()

if filename is None:
  print "User canceled the dialog!"
else:
  directory = od.getDirectory()
  filepath = directory + filename
  print "Selected file path:", filepath
          

Ask the user to enter a few parameters in a dialog

There are more possibilities, but these are the basics. See GenericDialog.

All plugins that use a GenericDialog are automatable. Remember how above we ran a command on an image? When the names in the dialog fields match the names in the macro string, the values are fed into the dialog automatically. If a dialog field doesn't have a match, it takes the default value as defined in the dialog declaration.

If a plugin was using a dialog like the one we built here, we would run it automatically like this:

args = "name='first' alpha=0.5 output='32-bit' scale=80"

IJ.run(imp, "Some PlugIn", args)
          

Above, leaving out the word 'optimize' means that it will use the default value (True) for it.

from ij.gui import GenericDialog

def getOptions():
  gd = GenericDialog("Options")
  gd.addStringField("name", "Untitled")
  gd.addNumericField("alpha", 0.25, 2)  # show 2 decimals
  gd.addCheckbox("optimize", True)
  types = ["8-bit", "16-bit", "32-bit"]
  gd.addChoice("output as", types, types[2])
  gd.addSlider("scale", 1, 100, 100)
  gd.showDialog()
  #
  if gd.wasCanceled():
    print "User canceled dialog!"
    return
  # Read out the options
  name = gd.getNextString()
  alpha = gd.getNextNumber()
  optimize = gd.getNextBoolean()
  output = gd.getNextChoice()
  scale = gd.getNextNumber()
  return name, alpha, optimize, output, scale

options = getOptions()
if options is not None:
  name, alpha, optimize, output, scale = options
  print name, alpha, optimize, output, scale
          

Show a progress bar

This will show a progress bar in the Fiji window.

from ij import IJ

imp = IJ.getImage()
stack = imp.getImageStack()

for i in xrange(1, stack.getSize()+1):
  # Report progress
  IJ.showProgress(i, stack.getSize()+1)
  # Do some processing
  ip = stack.getProcessor(i)
  # ...

# Signal completion
IJ.showProgress(1)
          

8. Turn your script into a plugin

Save the script in Fiji's plugins folder or a subfolder, with:

  • An underscore "_" in the name.
  • ".py" file extension.
For example: "my_script.py"

Then run "Help - Update Menus", or restart Fiji. That's it!

The script will appear as a regular menu command under "Plugins", and you'll be able to run it from the Command Finder.

Where is the plugins folder?

  • In MacOSX, it's inside the Fiji application:
    1. Go to the "Applications" folder in the Finder.
    2. Right-click on the Fiji icon and select "Show package contents"
  • In Ubuntu and in Windows, it's inside the "Fiji.app" folder.

See also the Fiji wiki on Jython Scripting.


9. Lists, native arrays, and interacting with Java classes

Jython lists are passed as read-only arrays to Java classes

Calling java classes and methods from jython is seamless: on the surface, there isn't any difference with calling jython classes and methods. But there is a subtle difference when calling java methods that expect native arrays.

Jython will automatically present a jython list as a native array to the java method that expects it. But as read-only!

In this example, we create an AffineTransform that specifies a translation. Then we give it:

  • A 2D point defined as a list of 2 numbers: the list fails to be updated in place by the transform method of the affine.
  • A 2D point defined as a native float array of 2 numbers: the array is correctly updated in place.

The ability to pass jython lists as native arrays to java methods is extremely convenient, and we have used it in the example above to pass a list of strings to the GenericDialog addChoice method.

from java.awt.geom import AffineTransform
from array import array

# A 2D point
x = 10
y = 40

# A transform that does a translation
# of dx=45, dy=56
aff = AffineTransform(1, 0, 0, 1, 45, 56)

# Create a point as a list of x,y
p = [x, y]
aff.transform(p, 0, p, 0, 1)
print p
# prints: [10, 40] -- the list p was not updated!

# Create a point as a native float array of x,y
q = array('f', [x, y])
aff.transform(q, 0, q, 0, 1)
print q
# prints: [55, 96] -- the native array q was updated properly
          
Started New_.py at Sat Nov 13 09:31:51 CET 2010
[10, 40]
array('f', [55.0, 96.0])
          

Creating native arrays: empty, or from a list

The package array contains two functions:

  • zeros: to create empty native arrays.
  • array: to create an array out of a list, or out of another array of the same kind.

The type of array is specified by the first argument. For primitive types (char, short, int, float, long, double), use a single character in quotes. See the list of all possible characters.
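
For example (a sketch; these are the standard python 2 array typecodes, which jython maps to java primitive arrays):

from array import zeros

# 'c' char, 'b' byte, 'h' short, 'i' int, 'l' long, 'f' float, 'd' double
ints = zeros('i', 4)   # an int[] of length 4
floats = zeros('f', 4) # a float[] of length 4
print ints, floats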

Manipulating arrays is done in the same way that you would do in java: see how, in the example, the current image is assigned to the first element of the imps array.

See also the documentation on how to create multidimensional native arrays with Jython.
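
One way to create a multidimensional native array is via java reflection (a sketch; the dimensions here are arbitrary):

from java.lang.reflect import Array
from java.lang import Float

# A native float[5][4] java array:
matrix = Array.newInstance(Float.TYPE, [5, 4])
matrix[0][0] = 1.0
print len(matrix), len(matrix[0])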

from array import array, zeros
from ij import IJ, ImagePlus

# An empty native float array of length 5
a = zeros('f', 5)
print a

# A native float array with values 0 to 9
b = array('f', [0, 1, 2, 3, 4])
print b

# An empty native ImagePlus array of length 5
imps = zeros(ImagePlus, 5)
print imps

# Assign the current image to the first element of the array
imps[0] = IJ.getImage()
print imps

# Length of an array
print "length:", len(imps)
          
Started New_.py at Sat Nov 13 09:40:00 CET 2010
array('f', [0.0, 0.0, 0.0, 0.0, 0.0])
array('f', [0.0, 1.0, 2.0, 3.0, 4.0])
array(ij.ImagePlus, [None, None, None, None, None])
array(ij.ImagePlus, [imp[boats.gif 720x576x1], None, None, None, None])
length: 5
          

10. Generic algorithms that work on images of any kind: using ImgLib

Imglib is a general-purpose software library for n-dimensional data processing, mostly oriented towards images. Scripting with Imglib greatly simplifies operations on images of different types (8-bit, 16-bit, color images, etc).

Scripting in imglib is based around the Compute function, which composes images, functions and numbers into output images.


Mathematical operations on images

The script.imglib package offers means to compute with images. There are four kinds of operations, each in its own package:

  • script.imglib.math: offers functions that operate on each pixel. These functions are composable: the result of one function may be used as the input to another function.
    These math functions accept any possible pair of: images, numbers, and other functions.
  • script.imglib.color: offers functions to create and manipulate color images, for example to extract specific color channels, either in RGB or in HSB color space.
    These color functions are composable with math functions--for example, to subtract one color channel from another.
  • script.imglib.algorithm: offers functions such as Gauss, Scale3D, Affine3D, Resample, Downsample ... that alter many pixels in one pass--they are not pixel-wise operations. Some change the dimensions of an image.
    These algorithm functions all return images, or what is the same, they are the result images of applying the function to the input image.
  • script.imglib.analysis: offers functions to extract or measure images or functions that evaluate to images. For example, the DoGPeak, which finds intensity peaks in the image by difference of Gaussian, returns a list of the coordinates of the found peaks.
    These analysis functions return collections of the results.

from script.imglib.math import Compute, Subtract
from script.imglib.color import Red, Green, Blue, RGBA
from script.imglib import ImgLib
from ij import IJ

# Open an RGB image stack
imp = IJ.openImage("https://imagej.nih.gov/ij/images/flybrain.zip")

# Wrap it as an Imglib image
img = ImgLib.wrap(imp)

# Example 1: subtract red from green channel
sub = Compute.inFloats(Subtract(Green(img), Red(img)))

ImgLib.wrap(sub).show()

# Example 2: subtract red from green channel, and compose a new RGBA image
rgb = RGBA(Red(img), Subtract(Green(img), Red(img)), Blue(img)).asImage()

ImgLib.wrap(rgb).show()
          

Using image math for flat-field correction

In the example, we start by opening an image from the sample image collection of ImageJ.
Then, since we are lacking a flatfield image, we simulate one. We could do it using a median filter with a very large radius, but that is too expensive to compute just for this example. Instead, we scale down the image, apply a Gauss to the scaled down image, and then resample the result up to the original image dimensions.
Then we do the math for flat-field correction:

  1. Subtract the brightfield from the image. (The brightfield is an image taken in the same conditions as the data image, but without the specimen: just the dust and debris and uneven illumination of the microscope.)
  2. Subtract the darkfield from the brightfield. (The darkfield could represent the thermal noise in the camera chip.)
  3. Divide the result of (1) by the result of (2).
  4. Multiply the result of (3) by the mean intensity of the original image.

With imglib, all the above operations happen on a pixel-per-pixel basis, and are computed as fast or faster than if you had manually hand-coded every operation. And multithreaded!


from script.imglib.math import Compute, Divide, Multiply, Subtract
from script.imglib.algorithm import Gauss, Scale2D, Resample
from script.imglib import ImgLib
from ij import IJ

# 1. Open an image
img = ImgLib.wrap(IJ.openImage("https://imagej.nih.gov/ij/images/bridge.gif"))

# 2. Simulate a brightfield from a Gauss with a large radius
# (First scale down by 4x, then gauss of radius=20, then scale up)
brightfield = Resample(Gauss(Scale2D(img, 0.25), 20), img.getDimensions())

# 3. Simulate a perfect darkfield
darkfield = 0

# 4. Compute the mean pixel intensity value of the image
mean = reduce(lambda s, t: s + t.get(), img, 0) / img.size()

# 5. Correct the illumination
corrected = Compute.inFloats(Multiply(Divide(Subtract(img, brightfield),
                                             Subtract(brightfield, darkfield)), mean))

# 6. ... and show it in ImageJ
ImgLib.wrap(corrected).show()
          

Extracting and manipulating image color channels: RGBA and HSB

In the examples above we have already used the Red and Green functions. There's also Blue, Alpha, and a generic Channel that takes the channel index as argument--where red is 3, green is 2, blue is 1, and alpha is 4 (these numbers are related to the byte order in the 4-byte sequence that makes up a 32-bit integer). The basic color operations have to do with extracting the color channel, for a particular color space (RGBA or HSB).

The function RGBA takes from 1 to 4 arguments, and creates an RGBA image out of them. These arguments can be images, other functions, or numbers--a plain number means that every pixel of that channel takes that value, for example 255 (maximum intensity).
In the example, we create a new RGBA image that takes the Gaussian of the red channel, the value 40 for all pixels of the green channel, and the dithered image of the blue channel.
Notice that the Dither function returns 0 or 1 values for each pixel, hence we multiply them by 255 to make them full intensity of blue in the RGBA image.


from script.imglib.math import Compute, Subtract, Multiply
from script.imglib.color import Red, Blue, RGBA
from script.imglib.algorithm import Gauss, Dither
from script.imglib import ImgLib
from ij import IJ

# Obtain a color image from the ImageJ samples  
clown = ImgLib.wrap(IJ.openImage("https://imagej.nih.gov/ij/images/clown.jpg"))
  
# Example 1: compose a new image manipulating the color channels of the clown image:  
img = RGBA(Gauss(Red(clown), 10), 40, Multiply(255, Dither(Blue(clown)))).asImage()  
  
ImgLib.wrap(img).show()
          

In the second example, we extract the HSB channels from the clown image. To the Hue channel (which is expressed in the range [0, 1]), we add 0.5. We've shifted the hue around a bit.
To understand how the hue values work (by flooring the float value and subtracting that from it), see this page.
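
In plain python, that wrap-around looks like this (a sketch; wrap_hue is a hypothetical helper, with hue values assumed non-negative):

# Hue is periodic in [0, 1): adding 0.5 may push a value past 1,
# so the integer part is floored away to wrap it around:
def wrap_hue(h):
  return h - int(h)

print wrap_hue(0.7 + 0.5) # prints 0.2 (up to floating-point precision)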

 


from script.imglib.math import Compute, Add, Subtract
from script.imglib.color import HSB, Hue, Saturation, Brightness
from script.imglib import ImgLib
from ij import IJ

# Obtain an image
img = ImgLib.wrap(IJ.openImage("https://imagej.nih.gov/ij/images/clown.jpg"))

# Obtain a new clown, whose hue has been shifted by half
# with the same saturation and brightness of the original
bluey = Compute.inRGBA(HSB(Add(Hue(img), 0.5), Saturation(img), Brightness(img)))

ImgLib.wrap(bluey).show()
          

In the third example, we apply a gamma correction to an RGB confocal stack. To correct the gamma, we must first extract each color channel from the image, and then apply the gamma to each channel independently. In this example we use a gamma of 0.5 for every channel. Of course you could apply different gamma values to each channel, or apply it only to specific channels.

Notice how we use asImage() instead of Compute.inRGBA. The result is the same; the former is syntactic sugar for the latter.
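
For instance, under that equivalence, the following two lines would produce the same image (a sketch, reusing the clown image and the functions imported in the examples above):

composed = RGBA(Gauss(Red(clown), 10), 40, Blue(clown))
img1 = composed.asImage()        # syntactic sugar...
img2 = Compute.inRGBA(composed)  # ...for this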

 


# Correct gamma
from script.imglib.math import Min, Max, Exp, Multiply, Divide, Log
from script.imglib.color import RGBA, Red, Green, Blue
from script.imglib import ImgLib
from ij import IJ

gamma = 0.5
img = ImgLib.wrap(IJ.getImage())

def g(channel, gamma):
  """ Return a function that, when evaluated, computes the gamma
      of the given color channel.
      If 'i' was the pixel value, then this function would do:
      double v = Math.exp(Math.log(i/255.0) * gamma) * 255.0;
      if (v < 0) v = 0;
      if (v > 255) v = 255;
  """
  return Min(255, Max(0, Multiply(Exp(Multiply(gamma, Log(Divide(channel, 255)))), 255)))

corrected = RGBA(g(Red(img), gamma), g(Green(img), gamma), g(Blue(img), gamma)).asImage()

ImgLib.wrap(corrected).show()
          

Find cells in a 3D image stack by Difference of Gaussian, count them, and show them in 3D as spheres.

First we define the cell diameter that we are looking for (5 microns; measure it with a line ROI over the image) and the minimum voxel intensity that we will care about (in this case, anything under a value of 40 will be ignored). And we load the image of interest: a 3-color-channel image of the first instar Drosophila larval brain.

Then we scale down the image to make it isotropic: so that voxels have the same dimensions in all axes.

We run the DoGPeaks ("Difference of Gaussian Peaks") with a pair of appropriate sigmas: the scaled diameter of the cell, and half that.

Each peak is a float[] array that specifies its coordinates. With these, we create Point3f instances, which we transport back to calibrated image coordinates.

Finally, we show in the 3D Viewer the peaks as spheres, and the image as a 3D volume.


 
# Load an image of the Drosophila larval fly brain and segment
# the 5-micron diameter cells present in the red channel.

from script.imglib.analysis import DoGPeaks
from script.imglib.color import Red
from script.imglib.algorithm import Scale2D
from script.imglib.math import Compute
from script.imglib import ImgLib
from ij3d import Image3DUniverse
from javax.vecmath import Color3f, Point3f
from ij import IJ

cell_diameter = 5  # in microns
minPeak = 40 # The minimum intensity for a peak to be considered so.
imp = IJ.openImage("http://samples.fiji.sc//samples/first-instar-brain.zip")

# Scale the X,Y axis down to isotropy with the Z axis
cal = imp.getCalibration()
scale2D = cal.pixelWidth / cal.pixelDepth
iso = Compute.inFloats(Scale2D(Red(ImgLib.wrap(imp)), scale2D))

# Find peaks by difference of Gaussian
sigma = (cell_diameter  / cal.pixelWidth) * scale2D
peaks = DoGPeaks(iso, sigma, sigma * 0.5, minPeak, 1)
print "Found", len(peaks), "peaks"

# Convert the peaks into points in calibrated image space
ps = []
for peak in peaks:
  p = Point3f(peak)
  p.scale(cal.pixelWidth * 1/scale2D)
  ps.append(p)

# Show the peaks as spheres in 3D, along with orthoslices:
univ = Image3DUniverse(512, 512)
univ.addIcospheres(ps, Color3f(1, 0, 0), 2, cell_diameter/2, "Cells").setLocked(True)
univ.addOrthoslice(imp).setLocked(True)
univ.show()
          

11. ImgLib2: writing generic, high-performance image processing programs

For a high-level introduction to ImgLib2, see:

Views of an image, with ImgLib2

ImgLib2 is a powerful library with a number of key concepts for high-performance, memory-efficient image processing. One such concept is that of a view of an image.

First, wrap a regular ImageJ ImagePlus into an ImgLib2 image, with the 'wrap' function in the ImageJFunctions namespace (AKA a static method), which we alias as IL for brevity using the as keyword in the import line.

Then, we view the image as an infinite image, using the Views.extendZero function: beyond the boundaries of the image, return the value zero as the pixel value.

An infinite image cannot be visualized in full. Therefore, we apply the Views.interval function to delimit it: in this example, to a "canvas" twice as large as before, with the image centered.

Then, we wrap the ImgLib2 interval imgL into an ImageJ ImagePlus (using a modified VirtualStack that reads directly from imgL), and show it.

Importantly, no pixel data was duplicated at any step. The Views concept enables us to define transformations to the image that are then concatenated and finally used to render the final image.

And furthermore, thanks to ImgLib2's underlying dimension-independent, data source-independent, and image type-independent model, this code applies to any image of any type and dimensions: images, volumes, 4D series. ImgLib2 is a very powerful library.

from ij import IJ
from net.imglib2.img.display.imagej import ImageJFunctions as IL
from net.imglib2.view import Views

# Load an image (of any dimensions) such as the clown sample image
imp = IJ.getImage()

# Convert to 8-bit if it isn't yet, using macros
IJ.run(imp, "8-bit", "")

# Access its pixel data from an ImgLib2 RandomAccessibleInterval
img = IL.wrapReal(imp)

# View as an infinite image, with a value of zero beyond the image edges
imgE = Views.extendZero(img)

# Limit the infinite image with an interval twice as large as the original,
# so that the original image remains at the center.
# It starts at minus half the image width, and ends at 1.5x the image width.
minC = [int(-0.5 * img.dimension(i)) for i in range(img.numDimensions())]
maxC = [int( 1.5 * img.dimension(i)) for i in range(img.numDimensions())]
imgL = Views.interval(imgE, minC, maxC)

# Visualize the enlarged canvas, so to speak
imp2 = IL.wrap(imgL, imp.getTitle() + " - enlarged canvas") # an ImagePlus
imp2.show()
          


There are multiple strategies for filling in the space beyond an image's boundaries. Above, we used Views.extendZero, which trivially sets the "outside" to the pixel value zero. But there are several variants, including Views.extendValue for arbitrary pixel values instead of zero, and Views.extendMirrorSingle and Views.extendMirrorDouble for mirroring the pixel values relative to the nearest image border, among others. See Views for details and more.
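
For instance, a minimal sketch of Views.extendValue, filling the space beyond the edges with mid-gray instead of zero (assuming the img, minC, maxC and IL from the script above):

from net.imglib2.type.numeric.integer import UnsignedByteType

# Beyond the image edges, return the value 128 (mid-gray in 8-bit)
imgV = Views.extendValue(img, UnsignedByteType(128))
imgLV = Views.interval(imgV, minC, maxC)
IL.wrap(imgLV, "gray canvas").show()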

In this example, we use Views.extendMirrorSingle and the effect is clear when we take an interval over it just like the one above: instead of the image surrounded by black space, we get mirror copies in every direction beyond the edges of the original image, which remains centered.

The various extended views each have their purpose. Extending makes it possible, for example, to avoid writing special-purpose code for algorithms that use a moving window around every pixel: the pixels on or near the border (depending on the size of the window) would otherwise need to be special-cased. Instead, with extended views, you can specify what data should be present beyond the border (a constant value, a mirror reflection of the image), and enormously reduce the complexity of your code.

You could also use them like ROIs (regions of interest): obtain a View on a specific region of the image, and apply to it any code that runs on whole images. Views simplify programming for image processing a lot.


img = ... # See above

# View mirroring the data beyond the edges
imgE = Views.extendMirrorSingle(img)
imgL = Views.interval(imgE, minC, maxC)

# Visualize the enlarged canvas, so to speak
imp2 = IL.wrap(imgL, imp.getTitle() + " - enlarged canvas") # an ImagePlus
imp2.show()
					


Counting cells: difference of Gaussian peak detection with ImgLib2

First we load ImageJ's "embryos" example image, which is RGB, and convert it to 8-bit (16-bit or 32-bit would work just fine). Then we wrap it as an ImgLib2 image, and acquire a mirroring infinite view of the image which is suitable for computing Gaussians.

The parameters of ImgLib2's Difference of Gaussian detection (DogDetection) are relatively straightforward. The key parameters are the sigmaLarger and sigmaSmaller, which define the sigmas of the two Gaussians that will be subtracted one from the other. The minPeakValue acts as a filter for noisy detections. The calibration would be useful in e.g. an LSM 3D volume where the Z axis has typically a lower resolution than the X and Y axes.

For visual validation, we read out the detected peaks as a PointRoi that we set on the imp, the original ImagePlus with the embryos (see image below with a PointRoi point on each embryo).

Then, we set out to measure a small interval around each detected peak (each embryo). For this, we use the sigmaSmaller, which is half of the radius of an embryo (determined empirically by using a line ROI over embryos and pushing 'm' to measure them), so that we define a 2D box around the peak, with a side twice that of sigmaSmaller plus one. Ideally, one would use a circular ROI by using a HyperSphere, but a square ROI as obtained with Views.interval will more than suffice here.

  Picking and measuring areas with Views.interval

To sum the pixel intensity values within the interval, we use Views.flatIterable on the interval, which provides a view that can be serially iterated over the interval. Otherwise, the interval, which is a RandomAccessibleInterval, would yield its pixel values only if we gave it each pixel coordinate to be measured. Then, we iterate each small view, obtaining a t (a Type) instance for every pixel, which in ImgLib2 is one of the key design features that enables so much indirection without sacrificing performance. To the t Type, which is a subclass of NumericType, we ask it to yield an integer with t.getInteger(). Python's built-in sum function adds up all the values of the generator (no list is created).

  Listing measurements with a Results Table

Finally, the peak X,Y coordinates and the sum of pixel values within the interval are added to an ImageJ ResultsTable.

 


from ij import IJ
from ij.gui import PointRoi
from ij.measure import ResultsTable
from net.imglib2.img.display.imagej import ImageJFunctions as IL
from net.imglib2.view import Views
from net.imglib2.algorithm.dog import DogDetection
from jarray import zeros

# Load the "Embryos" sample image; it is RGB and will be converted to 8-bit below
imp = IJ.openImage("https://imagej.nih.gov/ij/images/embryos.jpg")
# Convert it to 8-bit
IJ.run(imp, "8-bit", "")

# Access its pixel data from an ImgLib2 data structure: a RandomAccessibleInterval
img = IL.wrapReal(imp)

# View as an infinite image, mirrored at the edges which is ideal for Gaussians
imgE = Views.extendMirrorSingle(img)

# Parameters for a Difference of Gaussian to detect embryo positions
calibration = [1.0 for i in range(img.numDimensions())] # no calibration: identity
sigmaSmaller = 15 # in pixels: a quarter of the radius of an embryo
sigmaLarger = 30  # pixels: half the radius of an embryo
extremaType = DogDetection.ExtremaType.MAXIMA
minPeakValue = 10
normalizedMinPeakValue = False

# In the difference of Gaussian peak detection, the img acts as the interval
# within which to look for peaks. The processing is done on the infinite imgE.
dog = DogDetection(imgE, img, calibration, sigmaSmaller, sigmaLarger,
  extremaType, minPeakValue, normalizedMinPeakValue)

peaks = dog.getPeaks()

# Create a PointRoi from the DoG peaks, for visualization
roi = PointRoi(0, 0)
# A temporary array of integers, one per dimension the image has
p = zeros(img.numDimensions(), 'i')
# Load every peak as a point in the PointRoi
for peak in peaks:
  # Read peak coordinates into an array of integers
  peak.localize(p)
  roi.addPoint(imp, p[0], p[1])

imp.setRoi(roi)

# Now, iterate each peak, defining a small interval centered at each peak,
# and measure the sum of total pixel intensity,
# and display the results in an ImageJ ResultsTable.
table = ResultsTable()

for peak in peaks:
  # Read peak coordinates into an array of integers
  peak.localize(p)
  # Define limits of the interval around the peak:
  # (sigmaSmaller is half the radius of the embryo)
  minC = [p[i] - sigmaSmaller for i in range(img.numDimensions())]
  maxC = [p[i] + sigmaSmaller for i in range(img.numDimensions())]
  # View the interval around the peak, as a flat iterable (like an array)
  fov = Views.interval(img, minC, maxC)
  # Compute sum of pixel intensity values of the interval
  # (The t is the Type that mediates access to the pixels, via its get* methods)
  s = sum(t.getInteger() for t in fov)
  # Add to results table
  table.incrementCounter()
  table.addValue("x", p[0])
  table.addValue("y", p[1])
  table.addValue("sum", s)

table.show("Embryo intensities at peaks")
					


  Generative image: simulating embryo segmentation

Here, I am showing how to express images whose underlying data is not the typical array of pixels, but rather, each pixel value is chosen as a function of the spatial coordinate. The underlying pixel data is just the function. In this example, a white pixel is returned when the pixel falls within a radius of the detected embryo, and a black pixel otherwise, for the background.

You may ask yourself what the point of this simulated object segmentation is. It is merely to illustrate how these function-based images can be created. Practical uses will come later. If you want a real segmentation of the area of these embryos, see Fiji/ImageJ's "Analyze - Analyze Particles...", or the machine learning-based Trainable Weka Segmentation and SIOX simple interactive object extraction plugins.

First, we detect embryos using the Difference of Gaussian approach used above, with the DogDetection class. From this, we obtain the centers of all detected embryos, in floating-point coordinates.

Second, we define a value for the inside of the embryo (white), and another for the outside (black, the background).

Then we specify the radius that we want to paint with the inside value around the center coordinate of every detected embryo.

And crucially, we construct a KDTree, which is a data structure for fast spatial queries of coordinates. Here we use the kdtree to swiftly find, for every pixel in the final image, the nearest embryo center point.

Then, we define our "image". In quotes, because it is not an image. What we define is a method to obtain pixel values at arbitrary spatial coordinates, returning either inside (white) or outside (black) depending on the position in space for which we request a value. To this end, we define a new class Circles that is a RealRandomAccess, and, to avoid having to implement all the necessary methods of the RealRandomAccess interface, we extend the RealPoint class too, because it already implements pretty much everything we need except the critical get method from the Sampler interface. In other words, the only practical difference between a RealPoint and a RealRandomAccess is that the latter also implements the Sampler interface for requesting values.

The search is implemented using a NearestNeighborSearchOnKDTree, which does exactly what it says, and offers a stateful search method: first we invoke the search (at the implicit current spatial coordinate of the RealRandomAccess parts of the Circles class), and then we ask for the distance from the current coordinate to the nearest one found in the search. On the basis of the result--comparing with the radius--either the inside or the outside value is returned.

All that remains now is using the Circles RealRandomAccess as the data provider for a RealRandomAccessible that we name CircleData, which is still in real coordinates and unbounded. So we view it in a rasterized way, to be able to iterate it with integer coordinates--like the pixels of an image--and define its bounds to be those of the original image img containing the embryos (that is, img can be used here because it implements Interval and happens to have exactly the dimensions we want). The "pixels" never exist in memory until they are written to the final image that is visualized. Voilà.

from ij import IJ
from net.imglib2.view import Views
from net.imglib2.algorithm.dog import DogDetection
from net.imglib2 import KDTree, RealPoint, RealRandomAccess, RealRandomAccessible
from net.imglib2.neighborsearch import NearestNeighborSearchOnKDTree
from net.imglib2.img.display.imagej import ImageJFunctions as IL
from net.imglib2.type.numeric.integer import UnsignedByteType
from java.util import AbstractList

# The difference of Gaussian calculation, same as above (compressed)
# (or, paste this script under the script above where 'dog' is defined)
imp = IJ.openImage("https://imagej.nih.gov/ij/images/embryos.jpg")
IJ.run(imp, "8-bit", "")
img = IL.wrapReal(imp)
imgE = Views.extendMirrorSingle(img)
dog = DogDetection(imgE, img, [1.0, 1.0], 15.0, 30.0,
  DogDetection.ExtremaType.MAXIMA, 10, False)

# The spatial coordinates for the centers of all detected embryos
centers = dog.getSubpixelPeaks() # in floating-point precision

# A value for the inside of an embryo: white, in 8-bit
inside = UnsignedByteType(255)
  
# A value for the outside (the background): black, in 8-bit
outside = UnsignedByteType(0)

# The radius of a simulated embryo, same as sigmaLarger was above
radius = 30 # or = sigmaLarger

# KDTree: a data structure for fast lookup of spatial coordinates
kdtree = KDTree([inside] * len(centers), centers)

# The definition of circles (or spheres, or hyperspheres) in space
class Circles(RealPoint, RealRandomAccess):
  def __init__(self, n_dimensions, kdtree, radius, inside, outside):
    super(RealPoint, self).__init__(n_dimensions)
    self.kdtree = kdtree  # keep a reference: needed by copyRealRandomAccess
    self.search = NearestNeighborSearchOnKDTree(kdtree)
    self.radius = radius
    self.radius_squared = radius * radius
    self.inside = inside
    self.outside = outside
  def copyRealRandomAccess(self):
    return Circles(self.numDimensions(), self.kdtree, self.radius,
		               self.inside, self.outside)
  def get(self):
    self.search.search(self)
    if self.search.getSquareDistance() < self.radius_squared:
      return self.inside 
    return self.outside

# The RealRandomAccessible that wraps the Circles in 2D space, unbounded
# NOTE: partial implementation, unneeded methods were left unimplemented
class CircleData(RealRandomAccessible):
  def realRandomAccess(self):
    return Circles(2, kdtree, radius, inside, outside)
  def numDimensions(self):
    return 2

# An unbounded view of the Circles that can be iterated in a grid, with integers
raster = Views.raster(CircleData())

# A bounded view of the raster, within the bounds of the original 'img'
# I.e. 'img' here is used as the Interval within which the CircleData is defined
circles = Views.interval(raster, img)

IL.wrap(circles, "Circles").show()
					

  Saving measurements into a CSV file, and reading them out as a PointRoi

Let's learn how to save the data to a CSV file. There are multiple ways to do so.

In the Results Table window, choose "File - Save...", which will save the table data in CSV format. Doesn't get any easier than this!
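
The same can be done programmatically: ij.measure.ResultsTable offers a saveAs method (a sketch, assuming the table variable from the script above):

# Save the ResultsTable created above directly as a CSV file
table.saveAs("/tmp/peaks.csv")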

In the absence of a Results Table, we can use python's built-in csv library.

First, we define two functions to provide the data (peakData and the helper function centerAt), so that (for simplicity and clarity) we separate getting the peak data from writing the CSV. To get the peak data, we define the function peakData, which does the same as the for loop above: localize the peak (which writes its coordinates into the array p) and then sum the pixels around the peak using an interval view. The helper function centerAt returns two new arrays: the (minC, maxC) bounds that delimit the region of interest, translated to the peak.

Then, we write the CSV file one row at a time. We open the file within python's with statement, which ensures that, even if an error was to come up, the file handle will be closed, properly releasing system resources. The csv.writer function returns an object w onto which we call writerow for every peak. Notice the arguments provided to csv.writer, defining the delimiter (a comma, a space, a tab...), the quote character (for strings), and what to quote (everything that is not a number). The first row is the header, containing the titles of each column in the CSV file. Then each data row is written by providing writerow with the list of column entries to write: x, y and s, which is the sum of pixel values within the interval around x,y.

For completeness, I am showing here how to read the CSV file back into, in this example, a PointRoi, using the complementary function csv.reader. Note that numeric values are read in as strings, and must be transformed into floating-point numbers using the built-in function float.

from __future__ import with_statement
# IMPORTANT: imports from __future__ must go at the top of the file.

#
# ... same code as above here to obtain the peaks
#

from operator import add
import csv

# The minimum and maximum coordinates, for each image dimension,
# defining an interval within which pixel values will be summed.
minC = [-sigmaSmaller for i in xrange(img.numDimensions())]
maxC = [ sigmaSmaller for i in xrange(img.numDimensions())]

def centerAt(p, minC, maxC):
  """ Translate the minC, maxC coordinate bounds to the peak. """
  return map(add, p, minC), map(add, p, maxC)

def peakData(peaks, p, minC, maxC):
  """ A generator function that returns all peaks and their pixel sum,
      one at a time. """
  for peak in peaks:
    peak.localize(p)
    minCoords, maxCoords = centerAt(p, minC, maxC)
    fov = Views.interval(img, minCoords, maxCoords)
    s = sum(t.getInteger() for t in fov)
    yield p, s

# Save as CSV file
with open('/tmp/peaks.csv', 'wb') as csvfile:
  w = csv.writer(csvfile, delimiter=',', quotechar='"', quoting=csv.QUOTE_NONNUMERIC)
  w.writerow(['x', 'y', 'sum'])
  for p, s in peakData(peaks, p, minC, maxC):
    w.writerow([p[0], p[1], s])

# Read the CSV file into an ROI
roi = PointRoi(0, 0)
with open('/tmp/peaks.csv', 'r') as csvfile:
  reader = csv.reader(csvfile, delimiter=',', quotechar='"')
  header = reader.next() # advance reader by one line
  for x, y, s in reader:
    roi.addPoint(imp, float(x), float(y))

imp.show()
imp.setRoi(roi)
					

Transform an image using ImgLib2.

In this example, we will use ImgLib2's RealViews namespace to transform images with affine transforms: translate, rotate, scale, shear.

Let's introduce the concept of a View in ImgLib2: it's like a shallow copy, possibly transformed. That is, the underlying pixel array is not duplicated; instead, a transformation of some sort is applied to the pixels on the fly, as they are requested. Views can be concatenated.

Here we use:

  • Views.extendZero: takes a finite image and returns a view that returns the proper pixel values within the image, but a pixel value of zero beyond its edges.
  • Views.interpolate: enables retrieving pixel values for fractional coordinates (i.e. non-integer coordinates) with the help of an interpolation strategy, such as the NLinearInterpolatorFactory. Returns images of the RealRandomAccessible type, suitable for transformations.
  • RealViews.transform: views an image as transformed by the provided transformation, such as an affine transform. Operates on images that are RealRandomAccessible, such as those returned by Views.interpolate.
  • Views.interval: takes an infinite image (generally an infinite View) and adds limits to it, defining specific intervals in each of its dimensions within which the image is said to be defined. This is what we use to "crop" or to select a specific field of view. If the field of view includes regions outside the originally wrapped image, then it had better be "filled in" with a Views.extend (like Views.extendZero), or it will fail with an out of bounds exception when a user of the returned interval attempts to get pixels from such "outside" regions.

While the reasons that led to splitting the functionality into two separate namespaces (Views and RealViews) don't matter, the basic heuristic when looking up a View method is this: use Views when the interval is defined (that is, the image data is known to exist within a specific range between 0 and width, height, depth, etc., which is almost always), and use RealViews when the interval is not defined and pixels can be retrieved with real numbers, that is, floating-point numbers (such as when applying affine transforms or performing interpolations).

In the end, we call ImageJFunctions.wrap again to visualize the transformed image as a regular ImageJ ImagePlus containing a VirtualStack whose pixel source is the scaled-up View, whose pixel source, in turn, is the original ImagePlus. No data has been duplicated at any step!

from net.imglib2.realtransform import RealViews as RV
from net.imglib2.img.display.imagej import ImageJFunctions as IL
from net.imglib2.realtransform import Scale
from net.imglib2.view import Views
from net.imglib2.interpolation.randomaccess import NLinearInterpolatorFactory
from ij import IJ

# Load an image (of any dimensions)
imp = IJ.getImage()

# Access its pixel data as an ImgLib2 RandomAccessibleInterval
img = IL.wrapReal(imp)

# View as an infinite image, with a value of zero beyond the image edges
imgE = Views.extendZero(img)

# View the pixel data as a RealRandomAccessible
# (that is, accessible with sub-pixel precision)
# by using an interpolator
imgR = Views.interpolate(imgE, NLinearInterpolatorFactory())

# Obtain a view of the 2D image twice as big
s = [2.0 for d in range(img.numDimensions())] # as many 2.0 as image dimensions
bigger = RV.transform(imgR, Scale(s))

# Define the interval we want to see: the original image, enlarged by 2X
# E.g. from 0 to 2*width, from 0 to 2*height, etc. for every dimension
minC = [0 for d in range(img.numDimensions())]
maxC = [int(img.dimension(i) * scale) for i, scale in enumerate(s)]
imgI = Views.interval(bigger, minC, maxC)

# Visualize the bigger view
imp2x = IL.wrap(imgI, imp.getTitle() + " - 2X") # an ImagePlus
imp2x.show()
          

 


At any time, use e.g. print type(imgR) to see the class of e.g. the object imgR. Then either look it up in ImgLib2's GitHub repositories or in Google, or, often sufficiently, use print dir(imgR) to list all its accessible methods.


While the code in this example applies to images of any number of dimensions (2D, 3D, 4D) and type (8-bit, 16-bit, 32-bit, others), here we scale by a factor of two the boats example ImageJ image.


print type(imgR)
print dir(imgR)
					

<type 'net.imglib2.interpolation.Interpolant'>

['__class__', '__copy__', '__deepcopy__', '__delattr__', '__doc__',
'__ensure_finalizer__', '__eq__', '__format__', '__getattribute__',
'__hash__', '__init__', '__ne__', '__new__', '__reduce__', '__reduce_ex__',
'__repr__', '__setattr__', '__str__', '__subclasshook__', '__unicode__',
'class', 'equals', 'getClass', 'getInterpolatorFactory', 'getSource',
'hashCode', 'interpolatorFactory', 'notify', 'notifyAll', 'numDimensions',
'realRandomAccess', 'source', 'toString', 'wait']
					

The resulting ImagePlus can be saved using ImageJ's FileSaver methods, just like any other ImageJ image.

from ij.io import FileSaver

FileSaver(imp2x).saveAsPng("/path/to/boats-2x.png")
					

Rotating image volumes with ImgLib2.

Now we continue with a rotation around the Z axis (a rotation in XY) by 30 degrees. Remember, this code applies to images of any number of dimensions: it would work equally well as is for the boats image example above.

The rotation must be defined as the values of a matrix that describes an affine transform. For convenience, I use here the java.awt.geom.AffineTransform (aliased as Affine2D) to obtain the values of the rotation transform. These are then transferred to a Jama Matrix, which ImgLib2's AffineTransform class takes as the argument for its constructor. The matrix has to have one more column than rows, with the last column defining the translation. (The last row would be all zeros and a 1.0 at the end, so it is omitted.) Notice that the rest of the diagonal of the matrix is filled with 1.0 in the loop, for as many dimensions as the image has.

Then we view the rotated image as an ImagePlus that wraps a VirtualStack just like above. Of course, the rotated image is cropped: when rotating relative to the center, the center stays within the field of view, but the corners disappear. Below, we instead view an enlarged interval that fully contains the rotated image. (In this particular example the effect is not very visible because the MRI stack of a human head has black corners. To reveal the issue, I draw a white line along the borders beforehand by pushing 'a' to select all with a rectangular ROI, then choosing white color for the foreground color, and then pushing 'd' to draw it, confirming the dialog to draw in every section.)

 

from net.imglib2.realtransform import RealViews as RV
from net.imglib2.realtransform import AffineTransform
from net.imglib2.img.display.imagej import ImageJFunctions as IL
from ij import IJ
from net.imglib2.view import Views
from net.imglib2.interpolation.randomaccess import NLinearInterpolatorFactory
from java.awt.geom import AffineTransform as Affine2D
from java.awt import Rectangle
from Jama import Matrix
from math import radians

# Load an image (of any dimensions)
imp = IJ.getImage()

# Access its pixel data as an ImgLib2 RandomAccessibleInterval
img = IL.wrapReal(imp)

# View as an infinite image, with value zero beyond the image edges
imgE = Views.extendZero(img)

# View the pixel data as a RealRandomAccessible
# (that is, accessible with sub-pixel precision)
# by using an interpolator
imgR = Views.interpolate(imgE, NLinearInterpolatorFactory())

# Define a rotation by +30 degrees relative to the image center in the XY axes
# (not explicitly XY but the first two dimensions)
# by filling in a rotation matrix with values taken
# from a java.awt.geom.AffineTransform (aliased as Affine2D)
# and by filling in the rest of the diagonal with 1.0
# (for the case where the image has more than 2 dimensions)
angle = radians(30)
rot2d = Affine2D.getRotateInstance(
  angle, img.dimension(0) / 2, img.dimension(1) / 2)
ndims = img.numDimensions()
matrix = Matrix(ndims, ndims + 1)
matrix.set(0, 0, rot2d.getScaleX())
matrix.set(0, 1, rot2d.getShearX())
matrix.set(0, ndims, rot2d.getTranslateX())
matrix.set(1, 0, rot2d.getShearY())
matrix.set(1, 1, rot2d.getScaleY())
matrix.set(1, ndims, rot2d.getTranslateY())
for i in range(2, img.numDimensions()):
  matrix.set(i, i, 1.0)

print matrix.getArray()

# Define a rotated view of the image
rotated = RV.transform(imgR, AffineTransform(matrix))

# View the image rotated, without enlarging the canvas
# so we define the interval as the original image dimensions.
# (Notice the -1 on the max coordinate: the interval is inclusive)
minC = [0 for i in range(img.numDimensions())]
maxC = [img.dimension(i) -1 for i in range(img.numDimensions())]
imgRot2d = IL.wrap(Views.interval(rotated, minC, maxC),
  imp.getTitle() + " - rot2d")
imgRot2d.show()

# View the image rotated, enlarging the interval to fit it.
# (This is akin to enlarging the canvas.)
# We compute the bounds of the enlarged canvas by transforming a rectangle,
# then define the interval min and max coordinates by subtracting
# and adding as appropriate to exactly capture the complete rotated image.
# Notice the min coordinates have negative values, as the rotated image
# has pixels now somewhere to the left and up from the top-left 0,0 origin
# of coordinates.
bounds = rot2d.createTransformedShape(
  Rectangle(img.dimension(0), img.dimension(1))).getBounds()
minC[0] = (img.dimension(0) - bounds.width) / 2
minC[1] = (img.dimension(1) - bounds.height) / 2
maxC[0] += abs(minC[0]) -1 # -1 because it's inclusive
maxC[1] += abs(minC[1]) -1
imgRot2dFit = IL.wrap(Views.interval(rotated, minC, maxC),
  imp.getTitle() + " - rot2dFit")
imgRot2dFit.show()
					

To read out the values of the transformation matrix that specifies the rotation, print it: it's an array of arrays. Or pretty-print it with pprint, which requires turning the inner arrays into lists for nicer printing.

Given the desired 30 degree rotation, the "scale" part (the diagonal) becomes the cosine of 30 degrees (sqrt(3)/2 = 0.866), and the "shear" part (the second column of the first row, and the first column of the second row) becomes the sine of 30 degrees (0.5) with the appropriate sign (to the "left" for X, hence negative; and to the "right" for Y, hence positive). The third column contains the translation values corresponding to a rotation specified relative to the center of the image. While you could always write the matrix by hand, it is better to use libraries like, for 2D, the java.awt.geom.AffineTransform and its methods such as getRotateInstance. For 3D rotations and affine transformations in general, use e.g. javax.media.j3d.Transform3D and its method rotZ, which sets the transform to mean a rotation around the Z axis, as done in this example.
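
For instance, a minimal sketch of reading the matrix of a Z-axis rotation out of a Transform3D (assuming Java 3D is available, as it is when Fiji's 3D Viewer is installed; note that this rotation is relative to the origin, not to the image center):

from javax.media.j3d import Transform3D
from javax.vecmath import Matrix4d
from math import radians

t = Transform3D()
t.rotZ(radians(30))  # a rotation of 30 degrees around the Z axis
m = Matrix4d()
t.get(m)  # copy the 4x4 matrix out of the transform
print m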


print matrix.getArray()

from pprint import pprint
pprint([list(row) for row in matrix.getArray()])
					

array([D, [array('d', [0.8660254037844387, -0.49999999999999994, 0.0,
68.95963744804719]), array('d', [0.49999999999999994, 0.8660254037844387,
0.0, -31.360870627641567]), array('d', [0.0, 0.0, 1.0, 0.0])])

[[0.8660254037844387,  -0.49999999999999994, 0.0,  68.95963744804719],
 [0.49999999999999994,  0.8660254037844387,  0.0, -31.360870627641567],
 [0.0,                  0.0,                 1.0,  0.0]]
					

Processing RGB and ARGB images with ImgLib2.

An ARGB image is a hack: the four color channels have been stored each in one of the 4 bytes of a 32-bit integer. Directly processing the pixel array, which is made of integers, makes no sense at all. Prior to any processing, color channels must be separated.

For reference, the alpha channel is in the upper byte (index 0), the red in the 2nd (index 1), the green in the 3rd (index 2) and blue in the lowest byte, the 4th (index 3).
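
In terms of bit operations, reading the four channels out of one such 32-bit integer looks like this (a sketch, with a hypothetical pixel value):

argb = 0x80ff4010  # a hypothetical ARGB pixel value
alpha = (argb >> 24) & 0xff  # upper byte: 128
red   = (argb >> 16) & 0xff  # second byte: 255
green = (argb >>  8) & 0xff  # third byte: 64
blue  =  argb        & 0xff  # lowest byte: 16
print alpha, red, green, blue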

In ImgLib2, rather than copying a color channel into a new image with a new array of bytes, we acquire a View of its channels: by using the Converters functions, optionally together with the Views.hyperSlice functionality.

First, we load an RGB or ARGB image and wrap it as an ImgLib2 object (despite what IL.wrapRGBA seems to imply, the alpha channel is still at index 0). If the ImagePlus is not backed by a ColorProcessor, it will throw an error.

Then we invoke one of the several functions in the Converters namespace that handles ARGB images. Here, we use Converters.argbChannels, which delivers a view of the ARGB image as a stack of 4 images, one per channel. The channels image is equivalent to ImageJ's CompositeImage, in that each channel can be processed independently.

To read out a single channel, e.g. the red channel (index 1), we could use Converters.argbChannel(img, 1). Or, as we illustrate here, use Views.hyperSlice: a function to reduce the dimensionality of an image, in this case by fixing the last dimension (the channels) to always be the red channel (at index 1).

Of course, this code runs on 2D images (e.g. the leaf) or 3D images (e.g. the Drosophila larval brain LSM stack), or 4D images, or images of any dimensions.

from net.imglib2.converter import Converters
from net.imglib2.view import Views
from net.imglib2.img.display.imagej import ImageJFunctions as IL
from ij import IJ

# Load an RGB or ARGB image
imp = IJ.getImage()

# Access its pixel data from an ImgLib2 data structure:
# a RandomAccessibleInterval
img = IL.wrapRGBA(imp)

# Convert an ARGB image to a stack of 4 channels: a RandomAccessibleInterval
# with one more dimension than before.
# The order of channels in the stack can be changed by changing their indices.
channels = Converters.argbChannels(img, [0, 1, 2, 3])

# Equivalent to ImageJ's CompositeImage: channels are separate
impChannels = IL.wrap(channels, imp.getTitle() + " channels")
impChannels.show()

# Read out a single channel directly
red = Converters.argbChannel(img, 1)

# Alternatively, pick a view of the red channel in the channels stack.
# Takes the last dimension, which are the channels,
# and fixes it to just one: that of the red channel (1) in the stack.
red = Views.hyperSlice(channels, channels.numDimensions() -1, 1)

impRed = IL.wrap(red, imp.getTitle() + " red channel")
impRed.show()
					

  

  

 
 

Analysis of 4D volumes with ImgLib2: GCaMP imaging data

In neuroscience, we can observe the activity of neurons in a circuit by expressing, for example, a calcium sensor in every neuron of interest, generally using viruses as delivery vectors for mammals, birds and reptiles, or genetic constructs for the fruit fly Drosophila, the nematode C. elegans and zebrafish. From the many options available, we'll use data here from those called genetically encoded calcium indicators (GECI), the most widely used being GCaMP.

Here is a copy of the first 10 time points to try out the scripts below. For testing, I used only the first two of these ten.

GCaMP time series data comes in many forms. Here, I am using a series of 3D volumes, each volume saved as a single, separate file, representing a single time point of the neuronal activity data. These files were acquired by Nadine Randel and Raghav Chhetri with the IsoView microscope by the Keller lab at HHMI Janelia. The file format, KLB, is a compressed open source format for which a library exists (klb-bdv) in Fiji: enable the "SiMView" update site in the "Help - Update Fiji" settings.

Opening KLB-formatted stacks is easy: we merely use the KLB library. Given that the library is optional, I wrapped it in a try statement to warn the user about its absence, if so.

Each KLB stack file is compressed, so its size on disk can be misleading: a single 40 MB file may unpack into a 180 MB stack in computer memory. The decompression process is also costly. Therefore, we need a way to minimize the number of times we load each stack. To this end, I define a caching strategy known as memoization: the result of invoking a function with a specific set of arguments is stored, and if the function is called again with the same arguments, the stored result is returned right away. To prevent filling up all RAM, we can define the maximum number of items to store using the keyword argument maxsize, which here defaults to 30 but which we set to 10. Which ones should be thrown out first? The specific implementation here is an LRU cache: the least recently used item is thrown out first.

Note the make_synchronized function decoration on the Memoize.__call__ method: when multiple threads access the cache they will have to wait on each other (i.e. synchronize their access to the method), to avoid loading an individual volume that isn't yet cached multiple times simultaneously. This is crucial not just for cache correctness, but also for good performance of e.g. the BigDataViewer, which uses multiple threads for rendering images. (Read more about thread concurrency and synchronization in jython.)

Note that in python 3 (not available from java so far), we could merely decorate the openStack function with functools.lru_cache, sparing us from having to create our own LRU cache.
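
For reference, a sketch of what that would look like (python 3 only; it will not run in Fiji's jython):

# Python 3 only: functools.lru_cache replaces the Memoize class below
from functools import lru_cache

@lru_cache(maxsize=10)
def openStack(filepath):
  return klb.readFull(filepath)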

  Representing the whole 4D series as an ImgLib2 image

To ease processing the 4D series we take full advantage of the ImgLib2 library capabilities to abstract over data sources and type, and represent the whole data set as a single image, vol4d. We accomplish this feat by using a LazyCellImg: an Img type (a fancy RandomAccessibleInterval) that enables us to only load what we access, while still pretending to be handling the whole 4D data set.

First, we define dimensions of the data, by reading the first stack. We assume that all stacks--one per time point--have the same spatial dimensions.

Then, we define the dimensions of vol4d: same as those of each time point, plus the 4th axis representing time.

Then, we define the CellGrid: we specify how each piece (each independent stack) fits into the overall continuous volume (the whole 4D series). Basically, the grid is a simple linear arrangement--in time--of individual 3D stacks.

Then we define how each Cell of the grid is loaded: each cell is merely a single 3D stack for a single timepoint. We do so with the class TimePointGet, which implements the Get interface defined inside the LazyCellImg class.

The method TimePointGet.get loads the stack--via our memoized function--and wraps it in a Cell object, with the latter having 4 dimensions: the 3 of the volume plus time, with the time dimension being of size 1: each stack represents a single time point in the 4D series.

Finally we define vol4d as a LazyCellImg, taking as argument the grid, the type (in this case, an implicit UnsignedShortType since KLB data is in 16-bit), and the cell loader.

Note that via the Converters we could be loading e.g. 16-bit data (like the KLB stacks) and converting it on the fly (no copy in RAM ever) as whatever type we'd like, such as unsigned byte, unsigned int, unsigned long, float, double or whatever we wanted, as made possible by the corresponding types available in ImgLib2.
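
For instance, here is a sketch of viewing the 16-bit vol4d (defined in the script below) as 32-bit floats, converted on the fly, with no pixel data ever copied:

from net.imglib2.converter import Converters, RealFloatConverter
from net.imglib2.type.numeric.real import FloatType

# A float view of vol4d: values are converted as they are read
vol4d_floats = Converters.convert(vol4d, RealFloatConverter(), FloatType())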

 
 

from net.imglib2.img.cell import LazyCellImg, CellGrid, Cell
from net.imglib2.util import Intervals, IntervalIndexer
from net.imagej import ImgPlus
import os, sys
from collections import OrderedDict
from synchronize import make_synchronized

# Attempt to load the KLB library
# Must have enabled the "SiMView" update site from the Keller lab at Janelia
try:
  from org.janelia.simview.klb import KLB 
  klb = KLB.newInstance()
except:
  print "Could not import KLB file format reader."
  klb = None

# Source directory containing a list of files, one per stack
src_dir = "/home/albert/lab/scripts/data/4D-series/"
# You could also use a dialog to choose the directory
#from ij.io import DirectoryChooser
#dc = DirectoryChooser("Pick source directory")
#src_dir = dc.getDirectory()

# Each timepoint is a path to a 3D stack file
timepoint_paths = sorted(os.path.join(src_dir, name)
                         for name in os.listdir(src_dir)
                         if name.endswith(".klb"))

# Automatic LRU cache with limited storage:
class Memoize:
  def __init__(self, fn, maxsize=30):
    self.fn = fn           # The function to execute
    self.m = OrderedDict() # The cache
    self.maxsize = maxsize # The maximum number of items in the cache
  @make_synchronized
  def __call__(self, key):
    o = self.m.get(key, None)
    if o:
      self.m.pop(key) # Remove
    else:
      o = self.fn(key) # Invoke the memoized function
    self.m[key] = o # Store, as the last (newest) entry
    if len(self.m) > self.maxsize: # Trim cache
      self.m.popitem(last=False) # Remove first entry (the oldest)
    return o

# Function to open a single stack
def openStack(filepath):
  return klb.readFull(filepath)
  # If your files are e.g. TIFF stacks, use instead:
  #return IJ.openImage(filepath)

# Memoize the stack loading function for efficiency
getStack = Memoize(openStack, maxsize=10)

# Helper function to find the e.g. ShortArray object
# that wraps the short[] array containing the pixels.
# Each KLB file is opened as an ImgPlus, which wraps
# in this case an ArrayImg from which we obtain its ShortArray
def extractDataAccess(img, dimensions):
  if isinstance(img, ImgPlus):
    return extractDataAccess(img.getImg(), dimensions)
  try:
    return img.update(None) # a ShortArray holding pixel data for an ArrayImg
  except:
    print sys.exc_info()

# Open the first time point stack to read out the dimensions
# (It gets cached, so it is not time wasted)
first = getStack(timepoint_paths[0])

# One cell per time point
dimensions = [1 * first.dimension(0),
              1 * first.dimension(1),
              1 * first.dimension(2),
              len(timepoint_paths)]

cell_dimensions = dimensions[0:3] + [1]

# The grid: how each independent stack fits into the whole continuous volume
grid = CellGrid(dimensions, cell_dimensions)

# A class to retrieve each time point
# Each returned Cell has 4 dimensions: the 3 for the volume, plus time
class TimePointGet(LazyCellImg.Get):
  def __init__(self, timepoint_paths, cell_dimensions):
    self.timepoint_paths = timepoint_paths
    self.cell_dimensions = cell_dimensions
  def get(self, index):
    img = getStack(self.timepoint_paths[index])
    return Cell(self.cell_dimensions,
                [0, 0, 0, index],
                extractDataAccess(img, self.cell_dimensions))

vol4d = LazyCellImg(grid,
                    first.randomAccess().get().createVariable(),
                    TimePointGet(timepoint_paths, cell_dimensions))
					

 
 

  Visualizing the whole 4D series

With the vol4d variable now describing our entire 4D data set, we proceed to visualize it. There are multiple ways to do so.

  1. Trivially as an ImageJ CompositeImage containing a VirtualStack, using the ImageJFunctions.wrap method.  
     
     
  2. By creating both the CompositeImage and VirtualStack by hand, which affords more flexibility: we could, if we wanted, change the pixel type, or preprocess it in any way we wanted, even changing the dimensions by cropping or enlarging each slice. We could also insert slices as desired to e.g. represent two channels: the data and a segmentation--more on this below.
     
    To accomplish this, we require a fast way to copy pixels from a hyperslice of the 4D volume (obtained via Views.hyperSlice, twice: once for time, another for Z, to extract a 2D slice from a 4D volume) into an ImageJ ShortProcessor. I am using here a trick that shouldn't be used much: embedding java code inside a python script, just for the tight loop. This is done via the Weaver.method static function to define a java method that copies the data from one Cursor (of the hyperslice) to another (of an ArrayImg created via the convenient ArrayImgs.unsignedShorts method).
     
    The Weaver.method is currently limited to a compiler without java generics, so the code is less idiomatic than it should be. Despite the ugly type casts, at runtime these are erased and therefore the code will perform just as fast as more modern java code with generics.
     
    In addition, in order to run a python script that inlines java code, you will need the java compiler library in the java class path. This is accomplished by e.g. launching Fiji with the tools.jar in the classpath, like this (adjusting for the location of the tools.jar in your system):
    $ ./ImageJ-linux64 --class-path \
    /usr/lib/jvm/java-8-openjdk-amd64/lib/tools.jar
     
  3. By using the BigDataViewer framework, which opens the 4D volume as a window with a slider for time at the bottom, and each volume at each time point can be resliced arbitrarily. Note that you'll have to adjust the brightness and contrast through its own menus, as the default is way off from the 16-bit range of the data.
    This framework, particularly via the simple interface offered by the BdvFunctions class of the bigdataviewer-vistools library (a replacement for the ImageJFunctions library), enables us to e.g. add additional volumes overlaid on the 4D data, and more (see below).

# Visualization option 1:
# An automatically created 4D VirtualStack

from net.imglib2.img.display.imagej import ImageJFunctions as IL

IL.wrap(vol4d, "Volume 4D").show()


# Visualization option 2:
# Create a 4D VirtualStack manually

from net.imglib2.view import Views
from net.imglib2.img.array import ArrayImgs
from net.imglib2.img.basictypeaccess import ShortAccess
from ij import VirtualStack, ImagePlus, CompositeImage
from jarray import zeros, array
from ij.process import ShortProcessor
from fiji.scripting import Weaver
from net.imglib2 import Cursor
from net.imglib2.type.numeric.integer import UnsignedShortType

# Need a fast way to copy pixel-wise
w = Weaver.method("""
  static public final void copy(final Cursor src, final Cursor tgt) {
    while (src.hasNext()) {
      src.fwd();
      tgt.fwd();
      final UnsignedShortType t1 = (UnsignedShortType) src.get(),
                              t2 = (UnsignedShortType) tgt.get();
      t2.set(t1.get());
    }
  }
""", [Cursor, UnsignedShortType])

class Stack4D(VirtualStack):
  def __init__(self, img4d):
    super(VirtualStack, self).__init__(img4d.dimension(0), img4d.dimension(1),
                                       img4d.dimension(2) * img4d.dimension(3))
    self.img4d = img4d
    self.dimensions = array([img4d.dimension(0), img4d.dimension(1)], 'l')
    
  def getPixels(self, n):
    # 'n' is 1-based
    # Obtain a 2D slice from the 4D volume
    aimg = ArrayImgs.unsignedShorts(self.dimensions[0:2])
    nZ = self.img4d.dimension(2)
    fixedT = Views.hyperSlice(self.img4d, 3, int((n-1) / nZ)) # Z blocks
    fixedZ = Views.hyperSlice(fixedT, 2, (n-1) % nZ)
    w.copy(fixedZ.cursor(), aimg.cursor())
    return aimg.update(None).getCurrentStorageArray()
    
  def getProcessor(self, n):
    return ShortProcessor(self.dimensions[0], self.dimensions[1],
                          self.getPixels(n), None)

imp = ImagePlus("vol4d", Stack4D(vol4d))
nChannels = 1
nSlices = first.dimension(2)
nFrames = len(timepoint_paths)
imp.setDimensions(nChannels, nSlices, nFrames)

com = CompositeImage(imp, CompositeImage.GRAYSCALE)
com.show()



# Visualization option 3: BigDataViewer

from bdv.util import BdvFunctions

bdv = BdvFunctions.show(vol4d, "vol4d")

					

 

 

  Nuclei detection with difference of Gaussian

Once the 4D volume is loaded, we proceed to detect nuclei at every timepoint, with each timepoint being a 3D volume. For this, we'll use again the difference of Gaussian with the DogDetection class.

Critical to the success of the difference of Gaussian approach to nuclei detection is choosing good values for the parameters sigmaLarger, sigmaSmaller, and minPeakValue.

The difference of Gaussian approach works quite well when the data resembles circles or spheres, even when these are in contact, if they all have approximately the same dimensions. This is the case here, since neuron nuclei in Drosophila larvae are all about 5 micrometers in diameter. Here, the image volumes are uncalibrated (value of 1.0 for pixel width, pixel height, pixel depth), and I chose 5 pixels (half the diameter of the average-looking nucleus) as the sigmaLarger. In practice, half works better, as nuclei are not perfect spheres and I suspect that capturing the difference of Gaussian from a range entirely enclosed within the boundaries of a nucleus works best. For sigmaSmaller I chose half the value of sigmaLarger, which works well in practice. The operation consists of subtracting a flatter Gaussian from a sharper one, narrowing any occurring intensity peak (see wikipedia).

The minPeakValue parameter is used to set a threshold for considering a peak as valid. To estimate a good value for minPeakValue, try one of these two approaches:

  1. Manually: adjust the display brightness and contrast, duplicate a slice twice, and apply a Gaussian with sigmaSmaller to one copy and with sigmaLarger to the other. Then use Fiji/ImageJ's "Process - Image Calculator..." to subtract the smaller from the larger, choosing to create a new image in 32-bit depth. Move the mouse over nuclei-looking whitish blobs, and see what the largest value there is (it is printed under the Fiji/ImageJ toolbar as the mouse moves over the image). Do so for a few nuclei, particularly for nuclei that appear as large as nuclei can be (not just partially appearing in the optical section that the stack slice represents), and you'll get a sensible estimate for minPeakValue.
  2. Automatically: run the createDoG function for a range of values, and store the number of peaks found. If you plot the number of peaks as a function of minPeakValue, you'll see an inflexion point near the good value, with the number of detected peaks not changing much or at all for a range of contiguous values of minPeakValue; that will suggest a good numeric value for minPeakValue. This can be computationally quite expensive, but it is worth it.

In this example, I manually chose a minPeakValue. Estimating how wrong I was, using the automatic method, is left as an exercise for the reader.
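
As a starting point for that exercise, the scan could be sketched like this (reusing getStack and timepoint_paths from the scripts above, and createDoG with its parameters from the script below):

# Count detected peaks for a range of candidate minPeakValue values;
# look for the range where the count barely changes
img = getStack(timepoint_paths[0])  # the first timepoint, from the cache
for candidate in xrange(20, 200, 10):
  dog = createDoG(img, calibration, sigmaSmaller, sigmaLarger, candidate)
  print candidate, len(dog.getSubpixelPeaks())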

 

from net.imglib2.algorithm.dog import DogDetection
from collections import defaultdict

vol4d = ... # NOTE, this variable was created above and represents the 4D series

# Parameters for a Difference of Gaussian to detect nuclei positions
calibration = [1.0 for i in range(vol4d.numDimensions())] # no calibration: identity
sigmaSmaller = 2.5 # in pixels: a quarter of the radius of a neuron nucleus
sigmaLarger = 5.0  # pixels: half the radius of a neuron nucleus
minPeakValue = 100

# A function to create a DogDetection from an img
def createDoG(img, calibration, sigmaSmaller, sigmaLarger, minPeakValue):
  # Fixed parameters
  extremaType = DogDetection.ExtremaType.MAXIMA
  normalizedMinPeakValue = False
  # Infinite img
  imgE = Views.extendMirrorSingle(img)
  # In the difference of Gaussian peak detection, the img acts as the interval
  # within which to look for peaks. The processing is done on the infinite imgE.
  return DogDetection(imgE, img, calibration, sigmaSmaller, sigmaLarger,
    extremaType, minPeakValue, normalizedMinPeakValue)

def getDoGPeaks(timepoint_index, print_count=True):
  # From the cache
  img = getStack(timepoint_paths[timepoint_index])
  # or from the vol4d (same thing)
  #img = Views.hyperSlice(vol4d, 3, timepoint_index)
  dog = createDoG(img, calibration, sigmaSmaller, sigmaLarger, minPeakValue)
  peaks = dog.getSubpixelPeaks() # could also use getPeaks() in integer precision
  if print_count:
    print "Found", len(peaks), "peaks in timepoint", timepoint_index
  return peaks

# Map of timepoint indices and lists of DoG peaks in timepoint-local 3D coordinates
nuclei_detections = {ti: getDoGPeaks(ti) for ti in xrange(vol4d.dimension(3))}
					

  Visualizing detected peaks (nuclei) with a dynamically adjusted PointRoi

It is necessary to check how well we did in detecting nuclei. When there are thousands, this task can be onerous. For a quick look, we could display every detection as a point in a PointRoi. Here is how.

We define the class PointRoiRefresher which implements the ImageListener interface. Then we instantiate it with the nuclei_detections dictionary, and add it as an ImagePlus listener (using the Observer pattern [wikipedia]) via the static method ImagePlus.addImageListener.

In the PointRoiRefresher constructor, we loop through every timepoint and peaks pair via the dictionary iteritems method. Then, for every peak found in each timepoint 3D volume, we store its 2D coordinates in an array for the corresponding slice in the overall vol4d volume, which is stored in the self.nuclei dictionary. Note how we use the zOffset to account for the slices in vol4d from the volumes of prior timepoints.

Notice we use a defaultdict for self.nuclei: useful to avoid the nuisance of having to check if a list of points has already been created and inserted into the dictionary for a specific slice index. With defaultdict, upon requesting the value for a key that doesn't yet exist, the key/value pair is inserted with a new instance of the default value (here, an empty list), which is also returned and can be used right away. Any default value other than list is also possible for defaultdict.
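
A quick illustration of that behavior:

from collections import defaultdict

d = defaultdict(list)  # the default value is a new empty list
d[5].append("a")       # no need to check first whether key 5 exists
print d[5]             # prints ['a']
print d[42]            # prints [] -- key 42 now exists, with an empty list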

Importantly, note that we remove the listener when the image is closed, as it wouldn't make sense to keep it around in computer memory.

Note the pass keyword for the imageOpened method: it means that there isn't a body for this function, i.e. it does nothing.

Now, whenever you browse the Z axis, a new PointRoi is set on the image window of the vol4d, showing the detected nuclei. Neat! Of course, nuclei span multiple sections, so you'll have to scroll back and forth to make sure that an apparently undetected nucleus wasn't detected in another stack slice.

# Visualization 1: with a PointRoi for every vol4d stack slice,
#                  automatically updated when browsing through slices.

from ij import IJ, ImageListener, ImagePlus
from ij.gui import PointRoi
from java.awt import Color
from jarray import zeros
from collections import defaultdict
import sys

# Variables created above: (paste this code under the script above)

vol4d = ... # NOTE, this variable was created above and represents the 4D series

com = ... # CompositeImage holding the vol4d
          # It's an ImagePlus, so you can get it again with IJ.getImage()

nuclei_detections = ... # dictionary of timepoint indices vs DoG peaks

# Create a listener that, on slice change, updates the ROI
class PointRoiRefresher(ImageListener):
  def __init__(self, imp, nuclei_detections):
    self.imp = imp
    # A map of slice indices and 2D points, over the whole 4d volume
    self.nuclei = defaultdict(list)  # Any query returns at least an empty list
    p = zeros(3, 'f')
    for ti, peaks in nuclei_detections.iteritems():
      # Slice index offset, 0-based, for the whole timepoint 3D volume
      zOffset = ti * vol4d.dimension(2)
      for peak in peaks: # peaks are RealLocalizable, with subpixel coordinates
        peak.localize(p) # writes the x, y, z coordinates into the float array p
        self.nuclei[zOffset + int(p[2])].append(p[0:2])
  def imageOpened(self, imp):
    pass
  def imageClosed(self, imp):
    if imp == self.imp:
      imp.removeImageListener(self)
  def imageUpdated(self, imp):
    if imp == self.imp:
      self.updatePointRoi()
  def updatePointRoi(self):
    # Surround with try/except to prevent blocking
    #   ImageJ's stack slice updater thread in case of error.
    try:
      # Update PointRoi
      self.imp.killRoi()
      points = self.nuclei[self.imp.getSlice() - 1] # map 1-based slices
                                                    # to 0-based nuclei Z coords
      if 0 == len(points):
        IJ.log("No points for slice " + str(self.imp.getSlice()))
        return
      # New empty PointRoi for the current slice
      roi = PointRoi()
      # Style: large, red dots
      roi.setSize(4) # ranges 1-4
      roi.setPointType(2) # 2 is a dot (filled circle)
      roi.setFillColor(Color.red)
      roi.setStrokeColor(Color.red)
      # Add points:
      for point in points: # points are floats
        roi.addPoint(self.imp, int(point[0]), int(point[1]))
      self.imp.setRoi(roi)
    except:
      IJ.error(str(sys.exc_info()))

listener = PointRoiRefresher(com, nuclei_detections)
ImagePlus.addImageListener(listener)
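
To detach the listener manually, without closing the image (e.g. before re-running the script), use the static counterpart of addImageListener:

# Detach the listener manually
ImagePlus.removeImageListener(listener)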


  Visualizing detected peaks (nuclei) with 3D spheres

Manually checking whether all nuclei were detected is very time consuming and error prone. Instead, we could render a 3D volume with a black background where spheres are painted white, with the average radius of the nuclei, at the coordinates of the detected nuclei. We will use these generated spheres as a second channel (e.g. red), visualizing overlaps in yellow. While this method is not foolproof either, the existence of spheres which paint beyond the single Z section where the center is placed makes it harder to miss cells when visually inspecting an area. It's also better for spotting false detections.

The second channel with the spheres consists of on-the-fly generated images, based on a KDTree: a data structure for fast lookup of spatial coordinates (see example above for explanations). While we could render them into 3D array-based volumes, that would trade memory for CPU: individual 2D slices are comparatively tiny, and very fast to compute using the kdtrees.

First we convert the nuclei_detections into kdtrees for fast lookup. Notice that we provide the nuclei spatial coordinates themselves (the peaks) as both values and coordinates; that's because here we don't care about the value returned at a peak location; we'll be doing distance queries, returning the inside value for any location within a radius of a peak.
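
As an illustration of such a distance query, here is a minimal sketch; it assumes the kdtrees dictionary created in the code below, and queries an arbitrary, made-up location:

from net.imglib2 import RealPoint
from net.imglib2.neighborsearch import NearestNeighborSearchOnKDTree
from jarray import array

search = NearestNeighborSearchOnKDTree(kdtrees[0]) # KDTree of the first timepoint
query = RealPoint(array([100.0, 100.0, 10.0], 'd')) # an arbitrary x, y, z location
search.search(query)
print "Distance to the nearest detection:", search.getDistance()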

Then we define a color for the inside (white) and the outside (black) of the spheres, placing a sphere at the spatial coordinate of each putative nucleus detection.

Then we define the classes Spheres and SpheresData, which are almost identical to the classes Circles and CircleData used in the generative image example above. In brief, these two classes define the way by which data in space is generated: when within a radius of a nucleus detection, paint white (i.e. return an inside value), otherwise paint black (i.e. return an outside value).

Notice the use of the asterisk * for capturing multiple parameters into the args list in SpheresData. It's a shortcut that the python language offers, useful here to invoke the Spheres constructor by unpacking them all with the * again.
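
A tiny illustration of the mechanism, in plain python:

def capture(*args):
  return args # the parameters, packed into a tuple

def add3(a, b, c):
  return a + b + c

args = capture(1, 2, 3)
print add3(*args) # unpacks the tuple into the three parameters: prints 6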

Finally, we define the virtual stack with the class Stack4DTwoChannels. At its getPixels method, it divides n (the slice index) by two (there are two color channels, so twice as many virtual slices), and tests whether the requested slice n is odd or even, returning either an image (a 2D hyperslice of vol4d) or a rasterized and bounded 2D slice of the spheres that describe the nearby nuclei detections.
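
As a worked example of that arithmetic, with a hypothetical 10 slices per timepoint:

nZ = 10 # hypothetical: slices per timepoint 3D volume
for n in [1, 2, 19, 20, 21, 22]: # 1-based slice indices into the virtual stack
  slice_index = (n - 1) / 2            # 0-based, as if there was a single channel
  local_slice_index = slice_index % nZ # 0-based Z within one timepoint
  timepoint_index = slice_index / nZ
  channel = "image" if 1 == n % 2 else "spheres"
  print n, "->", "timepoint", timepoint_index, "Z", local_slice_index, channel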

We then show the 2-channel 4D volume as a CompositeImage. The main advantage over the prior visualization with PointRoi is that the spheres span more than a single section, making it easier to visually evaluate whether what we think are active neuronal cell nuclei are all being detected. The blending of red with green colors results in yellow nuclei, leaving false positive detections as green only, and false negatives as red. Now it is evident that, at the bottom, there are several false detections in what looks like a bunch of bundled axons of a nerve; the PointRoi wasn't even showing them within the shown volume.

Compare the 2-channel version (left) with the PointRoi version (right):


Of course, the left panel being a CompositeImage, now you can open the "Image - Color - Channels tool..." and change the LUT (look up table) of each channel, so that instead of red and green you set e.g. cyan and orange, which also results in great contrast. To do so, move the 'C' slider (either left or right: only two channels) and then push the "More" button of the "Channels Tool" dialog to select a different LUT.

# Visualization 2: with a 2nd channel where each detection is painted as a sphere

from net.imglib2 import KDTree, RealPoint, RealRandomAccess
from net.imglib2 import RealRandomAccessible, FinalInterval
from net.imglib2.neighborsearch import NearestNeighborSearchOnKDTree
from net.imglib2.type.numeric.integer import UnsignedShortType
from net.imglib2.img.array import ArrayImgs
from net.imglib2.view import Views
from ij import ImagePlus, CompositeImage, VirtualStack
from ij.process import ShortProcessor
from jarray import array

# Variables from above

vol4d = ... # NOTE, this variable was created above and represents the 4D series

nuclei_detections = ... # dictionary of timepoint indices vs DoG peaks

w = ... # A Weaver object with a w.copy method (see above)

timepoint_paths = ... # list of file paths, one per timepoint (see above)

# A KDTree is a data structure for fast lookup of e.g. nearest spatial coordinates
# Here, we create a KDTree for each timepoint 3D volume
# ('i' is the timepoint index)
kdtrees = {i: KDTree(peaks, peaks) for i, peaks in nuclei_detections.iteritems()}

radius = 5.0 # pixels

inside = UnsignedShortType(255) # 'white'
outside = UnsignedShortType(0)  # 'black'

# The definition of one sphere in 3D space for every nuclei detection
class Spheres(RealPoint, RealRandomAccess):
  def __init__(self, kdtree, radius, inside, outside):
    super(Spheres, self).__init__(3) # the RealPoint constructor: 3-dimensional
    self.kdtree = kdtree # keep a reference, needed for copying below
    self.search = NearestNeighborSearchOnKDTree(kdtree)
    self.radius = radius
    self.radius_squared = radius * radius # optimization for the search
    self.inside = inside
    self.outside = outside
  def copyRealRandomAccess(self):
    return Spheres(self.kdtree, self.radius, self.inside, self.outside)
  def get(self):
    self.search.search(self)
    if self.search.getSquareDistance() < self.radius_squared:
      return self.inside
    return self.outside

# The RealRandomAccessible that wraps the Spheres, unbounded
# NOTE: partial implementation, unneeded methods were left unimplemented
# NOTE: args are "kdtree, radius, inside, outside", using the * shortcut
#       given that this class is merely a wrapper for the Spheres class
class SpheresData(RealRandomAccessible):
  def __init__(self, *args): # captures all other arguments into args list
    self.args = args
  def realRandomAccess(self):
    return Spheres(*self.args) # Arguments get unpacked from the args list
  def numDimensions(self):
    return 3

# A two color channel virtual stack:
# - odd slices: image data
# - even slices: spheres (nuclei detections)
class Stack4DTwoChannels(VirtualStack):
  def __init__(self, img4d, kdtrees):
    # The last coordinate is the number of slices per timepoint 3D volume,
    # times the number of timepoints, times the number of channels (two)
    super(Stack4DTwoChannels, self).__init__(img4d.dimension(0), img4d.dimension(1),
                                             img4d.dimension(2) * img4d.dimension(3) * 2)
    self.img4d = img4d
    self.dimensions = array([img4d.dimension(0), img4d.dimension(1)], 'l')
    self.kdtrees = kdtrees
    self.dimensions3d = FinalInterval([img4d.dimension(0),
                                       img4d.dimension(1),
                                       img4d.dimension(2)])
    
  def getPixels(self, n):
    # 'n' is 1-based
    # Target 2D array img to copy data into
    aimg = ArrayImgs.unsignedShorts(self.dimensions[0:2])
    # The number of slices of the 3D volume of a single timepoint
    nZ = self.img4d.dimension(2)
    # The slice_index if there was a single channel
    slice_index = int((n-1) / 2) # 0-based, of the whole 4D series
    local_slice_index = slice_index % nZ # 0-based, of the timepoint 3D volume
    timepoint_index = int(slice_index / nZ) # Z blocks
    if 1 == n % 2:
      # Odd slice index: image channel
      fixedT = Views.hyperSlice(self.img4d, 3, timepoint_index)
      fixedZ = Views.hyperSlice(fixedT, 2, local_slice_index)
      w.copy(fixedZ.cursor(), aimg.cursor())
    else:
      # Even slice index: spheres channel
      sd = SpheresData(self.kdtrees[timepoint_index], radius, inside, outside)
      volume = Views.interval(Views.raster(sd), self.dimensions3d)
      plane = Views.hyperSlice(volume, 2, local_slice_index)
      w.copy(plane.cursor(), aimg.cursor())
    #
    return aimg.update(None).getCurrentStorageArray()
    
  def getProcessor(self, n):
    return ShortProcessor(self.dimensions[0], self.dimensions[1], self.getPixels(n), None)


imp2 = ImagePlus("vol4d - with nuclei channel", Stack4DTwoChannels(vol4d, kdtrees))
nChannels = 2
nSlices = vol4d.dimension(2) # Z dimension of each time point 3D volume
nFrames = len(timepoint_paths) # number of time points
imp2.setDimensions(nChannels, nSlices, nFrames)
com2 = CompositeImage(imp2, CompositeImage.COMPOSITE)
com2.show()
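
The channel LUTs can also be set programmatically; a minimal sketch, using ImageJ's LUT class on the com2 CompositeImage created above:

from ij.process import LUT
from java.awt import Color

# Set per-channel LUTs programmatically (channels are 1-based)
com2.setChannelLut(LUT.createLutFromColor(Color.cyan), 1)
com2.setChannelLut(LUT.createLutFromColor(Color.orange), 2)
com2.updateAndDraw()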

  Improving performance of the generative Spheres volume

It turns out that Jython's performance, when it comes to pixel-wise operations, falls way below that of java or of other, more JIT-friendly scripting languages. This performance drop led us, above, to use the Weaver.method approach to embed a java implementation of a pixel-wise data copy from one image container to another.

Here, we run into another pixel-wise operation: the Spheres.get method is called for every pixel in the 2D plane that makes up a slice of the Stack4DTwoChannels VirtualStack. If you noticed that scrolling through stack sections of the 2-channel image was slow, it was, and it's Jython's fault.

The solution is to either use other JVM languages, such as Clojure, or to create a java library with the necessary functions, or, once again, to create an on-the-fly java solution, such as an implementation of the Spheres class in java, embedded inside our python script. Ugly, and requires you to know java reasonably well, but it gets the job done.

In the interest of brevity, I am showing here only the code that changes: the new declaration of the Spheres class, in java, and its use from the SpheresData RealRandomAccessible wrapper class. The latter stays in jython: it does not perform any pixel-wise operations, so the additional cost incurred by the jython environment is negligible.

Given that instantiating an inner class (which is what Spheres is here, in this embedded java code snippet) from jython is tricky, I added the helper newSpheres method. All the SpheresData.realRandomAccess method has to do is invoke ws.newSpheres just as it invoked the jython-only Spheres constructor before (see above).

If you are getting errors when running this code snippet:

  1. Make sure that the tools.jar is in your java class path, e.g. launching Fiji like this:
    $ ./ImageJ-linux64 --class-path \
    /usr/lib/jvm/java-8-openjdk-amd64/lib/tools.jar
  2. Perhaps your Fiji.app/jars/weave_jy2java.*jar file is not up to date. Given the issues with java 6 vs java 8, if you are running java 8 or higher (you should), you may have to install the weaver library from source (NOTE: update file paths as necessary):
    $ git clone https://github.com/fiji/weave_jy2java.git
    $ cd weave_jy2java/
    $ mvn -Dimagej.app.directory=/home/albert/Fiji.app/ \
    clean install

# Big speed up: define the Spheres class in java
from fiji.scripting import Weaver

ws = Weaver.method("""
public final class Spheres extends RealPoint implements RealRandomAccess {
  private final KDTree kdtree;
  private final NearestNeighborSearchOnKDTree search;
  private final double radius,
                       radius_squared;
  private final UnsignedShortType inside,
                                  outside;

  public Spheres(final KDTree kdtree,
                 final double radius,
                 final UnsignedShortType inside,
                 final UnsignedShortType outside)
  {
    super(3); // 3 dimensions
    this.kdtree = kdtree;
    this.radius = radius;
    this.radius_squared = radius * radius;
    this.inside = inside;
    this.outside = outside;
    this.search = new NearestNeighborSearchOnKDTree(kdtree);
  }

  public final Spheres copyRealRandomAccess() { return copy(); }

  public final Spheres copy() {
    return new Spheres(this.kdtree, this.radius, this.inside, this.outside);
  }

  public final UnsignedShortType get() {
    this.search.search(this);
    if (this.search.getSquareDistance() < this.radius_squared) {
      return inside;
    }
    return outside;
  }
}

public final Spheres newSpheres(final KDTree kdtree,
                                final double radius,
                                final UnsignedShortType inside,
                                final UnsignedShortType outside)
{
  return new Spheres(kdtree, radius, inside, outside);
}

""", [RealPoint, RealRandomAccess, KDTree,
      NearestNeighborSearchOnKDTree, UnsignedShortType])


# The RealRandomAccessible that wraps the Spheres, unbounded
# NOTE: partial implementation, unneeded methods were left unimplemented
# NOTE: args are "kdtree, radius, inside, outside", using the * shortcut
#       given that this class is merely a wrapper for the Spheres class
class SpheresData(RealRandomAccessible):
  def __init__(self, *args):
    self.args = args
  def realRandomAccess(self):
    # Performance improvement: a java-defined Spheres class instance
    return ws.newSpheres(*self.args) # Arguments get unpacked from the args list
  def numDimensions(self):
    return 3
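
As a quick check that the java-backed version behaves like the jython-only one, we can sample a single, arbitrary location; a minimal sketch:

from jarray import array

# Sample one location of the java-backed Spheres
rra = SpheresData(kdtrees[0], radius, inside, outside).realRandomAccess()
rra.setPosition(array([100.0, 100.0, 10.0], 'd')) # an arbitrary x, y, z location
print rra.get() # the inside or outside UnsignedShortType value
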
  Visualizing detected peaks (nuclei) with 3D spheres with the BigDataViewer

Above, we used ImageJ's CompositeImage to visualize our VirtualStack. Here, we'll use the BigDataViewer framework, which offers arbitrary reslicing, and soon also (in an upcoming feature, already available for 16-bit data) GPU-accelerated volume rendering for true 3D visualization with depth perception.

Just like before (see above), we visualize the vol4d trivially with the BdvFunctions class.

Then we create a second 4D volume for the nuclei detections. The nuclei detections of each time point are each encoded by their own KDTree (stored as values in the kdtrees dictionary). With the helper function asVolume, we use each KDTree to define spheres in 3D space via the SpheresData class and the given radius and inside/outside pixel values. Then we specify the bounds to match those of the first 3 dimensions of the vol4d (i.e. the dimensions of the 3D volume of each time point) on a rasterized view (i.e. iterable with integer coordinates). Finally, via Views.stack we express a sequence of 3D volumes as a single 4D volume named spheres4d.

Then we add spheres4d as a second data set to the bdv BigDataViewer instance.

We adjust the intensity range (the min and max) using the BigDataViewer menu "Settings - Brightness & Color", which we use as well to adjust the "set view colors" of each of the two volumes (vol4d in red and spheres4d in green).

# Visualization 3: two-channels with the BigDataViewer

from bdv.util import BdvFunctions, Bdv
from net.imglib2.view import Views
from net.imglib2 import FinalInterval

# Variables defined above, in addition to the class SpheresData
vol4d = ...
kdtrees = ...
radius = ...
inside = ...
outside = ...


# Open a new BigDataViewer window with the 4D image data
bdv = BdvFunctions.show(vol4d, "vol4d")

# Create a bounded 3D volume view from a KDTree
def asVolume(kdtree, dimensions3d):
  sd = SpheresData(kdtree, radius, inside, outside)
  volume = Views.interval(Views.raster(sd), dimensions3d)
  return volume

# Define a 4D volume as a sequence of 3D volumes, each a bounded view of SpheresData
dims3d = FinalInterval(map(vol4d.dimension, xrange(3)))
volumes = [asVolume(kdtrees[ti], dims3d) for ti in sorted(kdtrees.iterkeys())]
spheres4d = Views.stack(volumes)

# Append the sequence of spheres 3d volumes to the open BigDataViewer instance
BdvFunctions.show(spheres4d, "spheres4d", Bdv.options().addTo(bdv))
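
These adjustments can also be scripted; a minimal sketch, assuming the BdvStackSource objects returned by BdvFunctions.show (capturing the second one into a hypothetical spheres_source variable instead of discarding it, as done in the last line above):

from net.imglib2.type.numeric import ARGBType

# Capture the return value of the second show call instead of discarding it
spheres_source = BdvFunctions.show(spheres4d, "spheres4d", Bdv.options().addTo(bdv))
bdv.setColor(ARGBType(ARGBType.rgba(255, 0, 0, 255)))            # vol4d in red
spheres_source.setColor(ARGBType(ARGBType.rgba(0, 255, 0, 255))) # spheres4d in green
spheres_source.setDisplayRange(0, 255) # inside is 255, outside is 0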

To be continued...