By Marissa Stonefield
Hyperspectral imaging (HSI) technologies are continuously advancing, opening up existing and potential applications in fields as diverse as biomedical imaging, smart agriculture, and machine vision. In fact, many fields that currently use multispectral imaging will soon migrate toward hyperspectral imaging as the technology becomes more readily available. This makes it important to stay up to date on leading HSI technologies, to learn how to balance technology growth, and to better understand where HSI is headed in the near future.
TruTag Technologies’ autonomous, cloud-connected HSI system is the first to capture and process a full multi-megapixel hyperspectral datacube without the need for external processing. With the capability to convert or augment existing product lines with minimal development effort, the TruTag Model 4100 Handheld Hyperspectral Imager claimed the SPIE 2017 Prism Award for Photonics Innovation (imaging and cameras division).
Dr. Hod Finkelstein, Chief Technology Officer at TruTag Technologies and a speaker at the OSA Imaging and Applied Optics Congress in 2017, discussed with Photonics Online his experience in developing TruTag’s hyperspectral imaging solution. Dr. Finkelstein also delved into how his past experiences in the medtech and radio frequency (RF) industries helped inform decisions made in creating new HSI technologies; finally, he offered insight into maintaining balance in the growth of both optical and data processing technologies for HSI.
PHOTONICS ONLINE - Tell us about your experience developing TruTag’s hyperspectral imaging solution. What did you learn? What were the challenges?
Dr. Hod Finkelstein — We followed a progressive de-risking process in selecting the right technology for our hyperspectral camera. This started with a very broad search for existing technologies, with the hope that we would not need to re-invent the wheel. Only once we were convinced that no other technology existed for delivering handheld, high-resolution hyperspectral imaging in a self-contained and mass-manufacturable instrument did we start the innovation process.
Once we converged on a tunable Fabry-Perot device, we started by modeling the system with the motto, “if it works in simulation, it may work in reality, but if it fails in a model, it will certainly fail in real life.” The working model allowed us to gain some confidence and invest more capital in building a prototype. Again – we started by using existing components, and only once those proved themselves in terms of basic functionality did we construct a complete prototype.
As far as I am concerned, the development process is never really over. As we gather feedback from field deployments of our cameras, we are continuously improving and upgrading our hardware, software and algorithms, and the user experience.
PO - What are the most important performance trade-offs and data processing considerations within hyperspectral imaging applications?
HF — I’d say that the most important trade-off is between universality of a hyperspectral camera and application-specific performance metrics. For example, certain applications greatly benefit from a built-in light source that provides excellent control and repeatability over illumination conditions, as compared with using ambient or external lighting. However, other applications require non-contact operation with ultrafast acquisition. This typically requires an external high-power source. Trying to accommodate as wide a range of applications as possible is challenging.
In terms of data processing, a similar trade-off exists. The most universal data output from a hyperspectral imager is a raw hyperspectral datacube, which contains all the spatial and spectral information that was acquired. However, these data sets are huge — around 1 GB each — and even just moving them around to external devices in a timely manner is difficult. We solved this problem by integrating efficient embedded processing algorithms in the camera. These enable us to identify the useful information even during the acquisition of the datacube, and thus greatly reduce processing, storage, and data communication requirements.
This clearly works best in sparser data sets. However, the trade-off here is that, if you are looking for the spectral signature of a mineral in a rock specimen, your data reduction algorithm will be different than if you are looking for mold in an onion. Thus, a certain level of customization is required for each application area.
PO - What strategies should be implemented for data reduction when using hyperspectral imaging cameras? What are the steps involved in the execution of these strategies?
HF — That really depends on the application. The more a priori information you have about the data you are looking for, the higher the achievable compression ratio. In certain cases, compression can be implemented right at the sensing stage. For example, if you are looking for spatially sparse objects with known spectral signatures, it is possible to identify regions of interest containing these objects during acquisition of the datacube, and to only process these regions. Typically, this is the most computationally efficient compression method, achieving the highest compression ratios.
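The region-of-interest strategy described above can be sketched in a few lines. The sketch below is illustrative only — a generic spectral-angle match on synthetic data, not TruTag's embedded algorithm — but it shows the core idea: compare each pixel's spectrum against a known target signature and keep only the pixels that match.

```python
import numpy as np

def spectral_angle(cube, target):
    """Angle (radians) between each pixel spectrum and a target signature.

    cube:   (H, W, B) hyperspectral datacube
    target: (B,) reference spectral signature
    """
    flat = cube.reshape(-1, cube.shape[-1])           # (H*W, B)
    dots = flat @ target
    norms = np.linalg.norm(flat, axis=1) * np.linalg.norm(target)
    cos = np.clip(dots / np.maximum(norms, 1e-12), -1.0, 1.0)
    return np.arccos(cos).reshape(cube.shape[:2])     # (H, W)

# Synthetic 64x64 cube with 20 bands; an 8x8 patch carries the target spectrum.
rng = np.random.default_rng(0)
bands = 20
target = np.exp(-0.5 * ((np.arange(bands) - 12) / 2.0) ** 2)  # Gaussian peak
cube = rng.uniform(0.4, 0.6, size=(64, 64, bands))            # flat background
cube[10:18, 10:18, :] = target + rng.normal(0, 0.02, size=(8, 8, bands))

angles = spectral_angle(cube, target)
roi = angles < 0.2            # radians; the threshold is application-specific
print("ROI pixels kept:", int(roi.sum()), "of", roi.size)
```

Only the pixels inside `roi` need further processing or transmission, which is where the large compression ratios come from when targets are spatially sparse.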
If the spectra mostly change slowly, it is possible to use traditional compression techniques and only transmit differences between frames. Many of the schemes used in 2D image compression can be utilized with datacubes, and can be applied along the wavelength axis, as well.
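As a minimal illustration of this difference-based approach (a textbook delta-encoding scheme on synthetic data, not a production codec), the sketch below stores the first spectral band plus band-to-band differences, then reconstructs the cube losslessly:

```python
import numpy as np

def delta_encode(cube):
    """Keep the first band, then store only band-to-band differences."""
    first = cube[..., :1]
    deltas = np.diff(cube, axis=-1)
    return first, deltas

def delta_decode(first, deltas):
    """Rebuild the cube by cumulatively summing the differences."""
    return np.concatenate([first, first + np.cumsum(deltas, axis=-1)], axis=-1)

rng = np.random.default_rng(1)
# Slowly varying spectra: a small random walk along the wavelength axis.
cube = np.cumsum(rng.normal(0, 0.01, size=(4, 4, 50)), axis=-1) + 1.0

first, deltas = delta_encode(cube)
restored = delta_decode(first, deltas)
print("lossless:", np.allclose(restored, cube))
```

The deltas occupy a much narrower value range than the raw bands, so a downstream entropy coder can represent them in fewer bits.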
Finally, when no a priori information is available regarding the datacube, principal component analysis (PCA) can be used to identify the optimal eigenvectors to encode the information, as well as to categorize regions in the datacube. In this case, a single image can capture the relevant information from the whole datacube.
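The PCA approach can be sketched with a plain SVD on a synthetic datacube. This is a generic illustration under simple assumptions (spectra that are mixtures of a few endmembers plus noise), not the camera's embedded implementation:

```python
import numpy as np

def pca_compress(cube, k):
    """Project a (H, W, B) datacube onto its top-k spectral principal components."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B)
    mean = X.mean(axis=0)
    # SVD of the mean-centred data gives the spectral eigenvectors.
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = Vt[:k]                       # (k, B)
    scores = (X - mean) @ basis.T        # (H*W, k)
    return scores.reshape(H, W, k), basis, mean

def pca_decompress(scores, basis, mean):
    H, W, k = scores.shape
    return (scores.reshape(-1, k) @ basis + mean).reshape(H, W, basis.shape[1])

rng = np.random.default_rng(2)
# Cube whose spectra are mixtures of 3 endmembers, so it is near rank-3.
endmembers = rng.uniform(0, 1, size=(3, 100))
abundances = rng.dirichlet(np.ones(3), size=32 * 32).reshape(32, 32, 3)
cube = abundances @ endmembers + rng.normal(0, 1e-3, size=(32, 32, 100))

scores, basis, mean = pca_compress(cube, k=3)
restored = pca_decompress(scores, basis, mean)
err = np.abs(restored - cube).max()
ratio = cube.size / (scores.size + basis.size + mean.size)
print(f"max reconstruction error: {err:.4f}, compression ratio: {ratio:.1f}x")
```

Here three score images plus a small spectral basis stand in for a 100-band cube, which is the sense in which "a single image" (or a handful of component images) can capture the relevant information.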
PO — What are the best techniques for maintaining a balance in the growth of both optical and data processing technologies for hyperspectral imaging solutions?
HF — This again depends on the application. For applications that are less cost-sensitive, such as scientific imaging, the brute force approach of using stronger and faster computers with larger embedded memory should be considered. For application-specific mobile applications, heavy use should be made of advanced algorithms to reduce data at the source, utilizing lossy compression. And, for applications in between, a balance between these approaches should be struck, depending on available information on the data that is of interest.
PO – How has your previous experience in the medtech and RF industries helped to inform decisions made in creating TruTag hyperspectral imaging technology?
HF — If we start with the RF world takeaways, one of the big dilemmas in the RF chip world is whether to go with RF CMOS technologies, or to opt for materials such as SiGe or GaAs. The latter typically offer superior performance, but the former are more manufacturable, typically lower-cost, and are available through more foundries.
I am a big proponent of utilizing generic technologies, even if they offer inferior performance, because through integration and access to larger technology trends, it is possible to compensate for these deficiencies in the longer term. While in HSI no “generic” technology currently exists, we selected the tunable Fabry-Perot interferometer (FPI) technology because we concluded it is the most mass-production-ready, and indeed we ported this technology into a high-volume production line in a more-or-less generic factory.
In terms of the medtech takeaways — and specifically from the DNA sequencing world where I previously worked — one key lesson is working sequentially while keeping a clear vision of the future. What I mean is that, with tough technical problems, it is important to first get things right while establishing a path towards cost-effectiveness, high yields, and so on. But the initial focus should be performance and time-to-market. We are now in the cost reduction phase and, of course, this would not be possible without having thought about this from day one. But there is a time for development and a time for cost and yield optimizations.
PO – How does the TruTag hyperspectral imaging technology improve existing multispectral imagers and expand the potential applications for high-resolution hyperspectral imaging?
HF — The premise of the TruTag hyperspectral camera is fundamentally different than that of the available multispectral imagers, and it is this premise that offers a completely different value proposition. Multispectral imagers hard-wire spectral selectivity to an imaging array. Whether by using an expanded color-filter array (CFA) or by applying an array of fixed FPIs, all these imagers create an integrated color-selective imaging array. This has some benefits in terms of size and image acquisition time, but it comes at a hefty price.
The TruTag technology decouples the spectrally-selective element from the image sensor by using a separate tunable FPI. This results in a number of important benefits. First, we can utilize the whole array to image all wavelengths. Existing multispectral devices lose spatial resolution inversely with the number of spectral channels. Second, we can allow the user to select which spectral channels to acquire. Multispectral devices have these hard-wired. For us, we can direct the FPI to only scan certain gaps, covering the wavelengths of interest. If you are flying over a maize field, we can scan the wavelengths relevant for the health of the maize plant. If you then fly over a wheat field, you don’t need to change your camera or record and transmit a lot of irrelevant data.
Third, we can leverage the standard image-sensor technology roadmap. If one application needs a low-cost VGA sensor, just replace the sensor — no need to change the more complex FPI or data processing hardware. If you need 50 MPixel resolution per image, that also does not require production of a new chip or manufacturing flow. We can work with area sensors or linear sensors. Finally, there is a limit to the number of polymer dye materials that can be used, and as users of multispectral cameras know, achieving a repeatable, robust spectral response across devices and time is an as-yet unsolved problem. Our device offers hundreds of bands, and can be extended to the IR without needing to undergo expensive, lengthy and risky development cycles.
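The band-selection idea above — directing the FPI to scan only certain cavity gaps — can be illustrated with the resonance condition of an ideal air-gap Fabry-Perot etalon at normal incidence, where order m transmits at wavelengths satisfying m·λ = 2·gap. This is a textbook simplification (real devices must also manage order sorting and mirror phase effects), and the wavelengths below are generic red-edge examples, not TruTag's actual design parameters:

```python
# Ideal air-gap Fabry-Perot etalon at normal incidence:
# transmission maxima occur where m * wavelength = 2 * gap (m = 1, 2, ...).

def gap_for_wavelength(wavelength_nm, order):
    """Cavity gap (nm) that places transmission order m at the target wavelength."""
    return order * wavelength_nm / 2.0

# Scan only bands of interest, e.g. red-edge wavelengths relevant to crop health.
bands_nm = [705, 740, 783]
for wl in bands_nm:
    print(f"{wl} nm -> gap {gap_for_wavelength(wl, order=1):.1f} nm")
```

Stepping the cavity through just these gap settings acquires only the wavelengths of interest, instead of recording and transmitting the full spectrum.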
PO – What are the most common misconceptions regarding hyperspectral imaging technology? Have there been any applications that have surprised you?
HF — The interesting thing for me is how flabbergasted people typically are when they first see how simple hyperspectral imaging is. The theory behind this technology is quite complex, but when you pick up the camera, press the trigger, and get either a hyperspectral video (a sequence of frames scanning the spectrum) or a processed hyperspectral image with the regions of interest highlighted, the technology becomes very real.
For us, it is critical to understand that the output of a hyperspectral camera is not the datacube. A datacube contains 1 GB of raw data, which is sparse in actionable information. The ability to find and identify this information in such a small device, one that does not require external computers, is the main novelty that shatters the misconception of the complex nature of this imaging modality. In terms of applications, really, for me, the breadth of applications has been eye-opening: from medical imaging and food safety to electronic component inspection and anti-counterfeiting. Many applications where HSI was off-limits due to cost are now entering its sphere of influence.
PO – Where do you see the industry of hyperspectral imaging heading? What are some potential future trends and applications?
HF — Clearly, the current fields utilizing multispectral imaging will migrate towards hyperspectral technologies as the latter become more widely available. This is for the simple reason that HSI offers better spectral and spatial resolution at the same or lower price points.
Although I cannot read the future, I can think of a number of new markets that may evolve in response to the availability of HSI in smaller form factors, at lower costs, and with embedded processing. Telemedicine is one — the ability to deliver high-resolution images with absolute spectral information to experts will enable remote diagnoses. In digital pathology, most current systems can only track a small number of stains. Integrating the ability to image tens of different targets simultaneously, quickly, and in high resolution may enable pathologists and microbiologists to better identify cellular processes with a higher level of reliability.