Color Space Preprocessing: Fun with TensorFlow

Do imported libraries interfere with your state-of-the-art pipeline?

The pipeline is fantastic, as are the complementary (and complimentary) TPUs. Model training is an iterative process, but it doesn’t have to be slow. With a TPU, work that lasted hours on CPU is wrapped up before you can pour a cup of tea. Understanding how not to interfere with that speed matters.

Framing: The Challenge

For an exemplary pipeline, we turn to Kaggle’s starter notebook in its popular contest, Cassava Leaf Disease Classification. Attached to the notebook is a set of TFRecords, TensorFlow’s recommended source for an efficient data stream, especially with big, unstructured data.

The blue sky and puffy clouds are considered a healthy leaf.

Can’t we focus on the leaves automatically, with a smarter crop?

Leaf experts say we can. Researchers at UCLA and around the globe have practice segmenting digital leafy content. According to their studies, the requisite color space is not RGB, but HSV (hue-saturation-value). Across camera types and lighting conditions, a hue threshold can segment plant material consistently.

Ignore non-leafy hues to find the right spot. Now that’s a healthy leaf!

Color mask: OpenCV vs TensorFlow

Our first function relies on cv2 for the color filtering. It starts and ends with a TensorFlow op, tf.cast, to handle tensors in and out. The default values for HSV arguments bracket mostly green hues.

The tensor passes to cv2 as a numpy array.
100% tensor-friendly 👍
HSV Cylinder (wikipedia)
The two masks are functionally equivalent. The better one will be determined by execution time and pipeline compatibility.

Custom crop: Loops vs Tensor Arrays

Let’s write the crop function a few ways for comparison. TensorBoard will declare a winner for us in the end. Again, the driving question is this: What kind of functions can we insert without impeding the pipeline’s speed?

Schematic of cv_color_mask then loop_crop, target_size=[300,300]. Dotted boxes are for demo only.
Schematic of tf_color_mask then array_crop, target_size=[224,224]. Red boxes are for demo only.
Schematic of tf_color_mask then max_rand_crop, target_size=[224,224], saccades=6. Dotted boxes are for demo only.
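To make the bounding-box idea concrete, here is a minimal, tensor-only sketch in the spirit of array_crop (my own sketch, not the article’s exact implementation; it assumes the mask has already zeroed non-leaf pixels and that at least one pixel survives):

```python
import tensorflow as tf

def array_crop(image, target_size=(224, 224)):
    # Bounding box of the surviving (non-zero) pixels, computed with tensor ops.
    nonzero = tf.reduce_any(image > 0, axis=-1)              # [H, W] bool
    rows = tf.where(tf.reduce_any(nonzero, axis=1))[:, 0]
    cols = tf.where(tf.reduce_any(nonzero, axis=0))[:, 0]
    crop = image[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1, :]
    return tf.image.resize(crop, target_size)                # back to a fixed shape
```

Because every op here is graph-traceable, this version has a fighting chance on accelerators, which is exactly what the comparisons below test.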

Pipeline Integration

Our preprocessing functions are called from decode_image. With 2 mask functions and 3 ways to crop, we have 6 possible combos to clock. And we’d like to compare performance across CPU, GPU and TPU. 😕 And we’d be cheating ourselves if we didn’t try decorating all the candidate functions with @tf.function with and without input signatures to avoid retracing and whatnot. 😵 Let’s see how far we get.

Sidebar to troubleshoot cv_color_mask

There is a hitch. With cv_color_mask in the pipeline, our training is over before it begins. We can troubleshoot this, though. If you are familiar with py_function, jump to the next subsection.

From the tf.data.Dataset.map documentation: “irrespective of the context in which [the mapped function] is defined (eager vs. graph), [map] traces the function and executes it as a graph. To use Python code inside of the function you have a few options:
1) Rely on AutoGraph to convert Python code into an equivalent graph computation. The downside of this approach is that AutoGraph can convert some but not all Python code.
2) Use tf.py_function, which allows you to write arbitrary Python code but will generally result in worse performance than 1).
3) Use tf.numpy_function, which also allows you to write arbitrary Python code. Note that tf.py_function accepts tf.Tensor whereas tf.numpy_function accepts numpy arrays and returns only numpy arrays.”

Back on track with tf.py_function

One of many improvements to TF2 is a more dexterous py_function. Even though cv_color_mask includes both cv2 and tf-native ops, the improved py_function sorts that out, and it’s compatible with GPU. 👍

To use cv_color_mask in the pipeline, we need the wrapper, py_function.
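A self-contained sketch of the wrapper follows. The eager function here just zeroes dark pixels as a stand-in for cv_color_mask; any Python, NumPy, or cv2 code would work in its place.

```python
import numpy as np
import tensorflow as tf

def eager_mask(image):
    # Stand-in for cv_color_mask: arbitrary eager Python/NumPy (or cv2) code.
    arr = image.numpy().copy()
    arr[arr < 128] = 0
    return tf.cast(arr, tf.float32)

def wrap_mask(image, label):
    # tf.py_function lets the traced dataset graph call back into Python...
    masked = tf.py_function(eager_mask, inp=[image], Tout=tf.float32)
    masked.set_shape([None, None, 3])   # ...but it drops static shape info
    return masked, label

images = np.random.randint(0, 256, (4, 8, 8, 3), np.uint8)
labels = np.zeros(4, np.int64)
ds = tf.data.Dataset.from_tensor_slices((images, labels))
ds = ds.map(wrap_mask, num_parallel_calls=tf.data.AUTOTUNE)
```

The set_shape call matters: py_function returns tensors of unknown shape, and downstream layers want at least the channel dimension pinned down.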

Every image has its shape

Notice that decode_image includes arguments for height and width. TensorFlow is picky about dimensions when it stages functions for graph mode. Why complicate things with options to vary shape?
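Because reshaping with per-example height and width gives the tracer a fully defined shape, graph mode stops guessing. A sketch (a hypothetical simplification, not the companion notebook’s exact decode_image):

```python
import tensorflow as tf

def decode_image(image_data, height, width):
    # Decoded JPEGs have unknown spatial dims at trace time;
    # reshaping with the stored height/width pins them down.
    image = tf.image.decode_jpeg(image_data, channels=3)
    image = tf.reshape(image, tf.stack([height, width, 3]))
    return tf.cast(image, tf.float32) / 255.0
```

Storing height and width as TFRecord features is what makes this possible without decoding twice.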

Pipeline Comparisons

The training notebook used for our comparisons is here. The classifier to be trained is a bare-bones convolutional neural network. It is built within a strategy scope; of course, get_strategy() returns just the _DefaultDistributionStrategy on Kaggle’s CPU and GPU kernels, both of which have 1 core.
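In outline, the strategy-scope pattern looks like this (a toy stand-in for the notebook’s network; the 224×224 input size and layer sizes are assumptions):

```python
import tensorflow as tf

strategy = tf.distribute.get_strategy()   # _DefaultDistributionStrategy off-TPU
with strategy.scope():                    # the same code works under TPUStrategy
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(224, 224, 3)),
        tf.keras.layers.Conv2D(16, 3, activation='relu'),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(5, activation='softmax'),  # 5 cassava classes
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
```

Building inside the scope costs nothing on a single device and is exactly what lets the TPU section below reuse the notebook unchanged.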

2 line changes → 7 preprocessing routines
  • On GPU, tf_color_mask takes the lead, but cv_color_mask is definitely serviceable; in the upcoming section on TensorBoard, we clarify that tf_color_mask did not accelerate until explicitly placed on GPU.

Beyond comparison: TPU

Flip the switch on Kaggle’s TPU v3-8, and the same notebook used for training on CPU and GPU is practically unchanged. TensorFlow opts for TPUStrategy. With 8 cores available, we increase from BATCH_SIZE=16 to BATCH_SIZE=128. Experts at Kaggle would remind us to adjust the learning rate by a factor of 8, as well, but we aren’t concerned with that parameter today.
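The switch itself is a few lines; the try/except fallback is what keeps the same notebook runnable on CPU and GPU:

```python
import tensorflow as tf

try:
    # Resolves the TPU on Kaggle/Colab; raises if no TPU is attached.
    tpu = tf.distribute.cluster_resolver.TPUClusterResolver()
    tf.config.experimental_connect_to_cluster(tpu)
    tf.tpu.experimental.initialize_tpu_system(tpu)
    strategy = tf.distribute.TPUStrategy(tpu)
except ValueError:
    strategy = tf.distribute.get_strategy()   # default on CPU/GPU

# 16 per replica: 16 on CPU/GPU, 128 across the TPU v3-8's 8 cores.
BATCH_SIZE = 16 * strategy.num_replicas_in_sync
```

Scaling the batch by num_replicas_in_sync, rather than hard-coding 128, is what makes the "practically unchanged" claim true.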

  • tf_color_mask, coupled with any of our custom crops, soars on TPU. Just flip the switch for faster training by almost an order of magnitude. And the memory of a TPU could accommodate much bigger images.

Behind the scenes with TensorBoard

Whereas fiddling with the TPU was not required, fiddling with the GPU was. Initially, Kaggle’s GPU accelerated cv_color_mask conditions as expected, but had very little impact on configurations involving tf_color_mask. A TensorBoard callback was revealing…
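The callback itself is one line; the profiler trace is what shows which device each op actually ran on. The log directory and batch range here are arbitrary choices:

```python
import tensorflow as tf

# Profile batches 5-10; TensorBoard's trace viewer then shows
# per-op device placement during those steps.
tb_callback = tf.keras.callbacks.TensorBoard(log_dir='./logs',
                                             profile_batch=(5, 10))
# model.fit(train_ds, epochs=1, callbacks=[tb_callback])
```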

Explicit placement on GPU made all the difference, especially for tf_color_mask.
  • The TensorBoard callback clarified a great deal.
  • Effort and time spent investigating the GPU is another reason to appreciate the easy speed of tf_color_mask on TPU.
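What “explicit placement” amounts to, sketched (the tf_color_mask here is a minimal hue filter, not the article’s exact function, and soft placement is enabled so the snippet falls back to CPU on machines without a GPU):

```python
import tensorflow as tf

tf.config.set_soft_device_placement(True)   # fall back gracefully without a GPU

def tf_color_mask(image):
    hue = tf.image.rgb_to_hsv(tf.cast(image, tf.float32) / 255.0)[..., 0]
    keep = tf.cast((hue >= 0.17) & (hue <= 0.50), tf.float32)[..., tf.newaxis]
    return tf.cast(image, tf.float32) * keep

def mask_on_gpu(image):
    # Without the device scope, these ops stayed on the host CPU.
    with tf.device('/GPU:0'):
        return tf_color_mask(image)
```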


Our smart crop could impact training and inference, so portability matters.

Shown: tf_color_mask, array_crop, and max_rand_crop (saccades=6).


Most TF2 tutorials cover model layers and training loops — the big matmul stuff, by the batch. We focused on preprocessing individual images, instead.

What we learned is encouraging. The pipeline is amenable to your custom code, even when you rely on imported libraries, …to a point.

In order to reap the full benefits of model training on TPU — not to mention deployment — use tensor-friendly ops. Kaggle and TensorBoard can facilitate your trial-and-error.

Epilogue: Find a Place for Color Space in Neural Networks

Proof of principle requires example. Our example happened to hinge on a color space conversion and hue.

We hope it reminded capable coders that the mind-blowing pipeline — TFRecords to TPU — accommodates simple creativity, too.

Did we mention that a $30,000 TPU from Kaggle is yours, free, for 30 hours/week? Excited? Good. Because the title said fun. And fun implies a challenge, right?

A side of hue

In the human brain, some visual processing is color-sensitive, some is not. If our goal is to organize a neural network like primate visual cortex, we might convert RGB tensors to grayscale for one afferent stream and HSV for another; the color images would be processed more slowly, but catch up via skip connections (Chen et al., 2007).

Immunohistochemistry in CIELAB color space

Nowhere is color space exploration more warranted than under the microscope, where the subject is stains.

CIELAB color space (wikipedia)

The pervasive problem with stains

No, it’s not about separating stains on a slide. It’s about reconciling different images of identical stains, even identical slides. Bigger datasets would be great for model training, and researchers are willing to share. But somehow their images look incompatible.

Fluoro-combo-blender, yeah!

Fluorochromes are carefully engineered to be distinguishable by hue. Obvious, right? Let’s see how this defining characteristic could be exploited in not-so-obvious ways. (This area of microscopy is loaded with tools and techniques; for thorough background, look here or here.)

By author. These images were acquired separately at the microscope, but public datasets might include only the color-merge (right). In the pipeline, tf_color_mask could recreate the RFP (left) and GFP (center) images.
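A hedged sketch of that idea. The hue bands below are my assumptions (red fluorescence near hue 0, green near 1/3), not calibrated values for any particular fluorochrome:

```python
import tensorflow as tf

def hue_band(image, low_h, high_h):
    # Keep only pixels whose hue falls in [low_h, high_h] (hue normalized to [0, 1]).
    img = tf.cast(image, tf.float32) / 255.0
    hue = tf.image.rgb_to_hsv(img)[..., 0]
    keep = tf.cast((hue >= low_h) & (hue <= high_h), tf.float32)[..., tf.newaxis]
    return img * keep * 255.0

# From a color-merge, approximate the single-channel images:
# rfp = hue_band(merge, 0.90, 1.00)   # red wraps around hue 0; crude one-sided band
# gfp = hue_band(merge, 0.20, 0.45)
```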

References, Resources & Links

  • Custom TFRecords with image height and width as example features; this dataset is also attached to the article’s companion notebook.
  • Ishikawa-Ankerhold HC, Ankerhold R, Drummen GPC. Advanced Fluorescence Microscopy Techniques — FRAP, FLIP, FLAP, FRET and FLIM. Molecules 2012, 17, 4047–4132. → 86 pages of outstanding physics, pics, historical narrative and more.
  • Chudakov DM, Matz MV, Lukyanov S, Lukyanov KA. Fluorescent proteins and their applications in imaging living cells and tissues. Physiol Rev. 2010 Jul;90(3):1103–63. doi: 10.1152/physrev.00038.2009
  • Different whole-slide scanners yield images that are too different.
  • Geread RS, Morreale P, Dony RD, Brouwer E, Wood GA, Androutsos D, Khademi A (2019). IHC Color Histograms for Unsupervised Ki67 Proliferation Index Calculation. Front. Bioeng. Biotechnol. 7:226. doi: 10.3389/fbioe.2019.00226 Images can solve real problems, but RGB is not always the ideal color space.
  • Pontalba JT, Gwynne-Timothy T, David E, Jakate K, Androutsos D, Khademi A (2019). Assessing the Impact of Color Normalization in Convolutional Neural Network-Based Nuclei Segmentation Frameworks. Front. Bioeng. Biotechnol. 7:300. doi: 10.3389/fbioe.2019.00300
  • Otálora S, Atzori M, Andrearczyk V, Khan A and Müller H (2019). Staining Invariant Features for Improving Generalization of Deep Convolutional Neural Networks in Computational Pathology. Front. Bioeng. Biotechnol. 7:198. doi: 10.3389/fbioe.2019.00198
  • Durand, A., Wiesner, T., Gardner, MA. et al. A machine learning approach for online automated optimization of super-resolution optical microscopy. Nat Commun 9, 5247 (2018).
  • Rumelhart, D. E., McClelland, J. L. & the PDP Research Group. Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 1: Foundations (MIT Press, Cambridge, Massachusetts, 1986). → The brain and mind sure work well together.
