Researchers at the University of Toronto’s Institute of Biomedical Engineering (BME) have developed a new artificial intelligence method that significantly improves how scientists analyze time-lapse microscopy images. The research was led by MASc student Raymond Hawkins in the lab of Professor Rodrigo Fernandez-Gonzalez, and the findings were published in the latest issue of the Journal of Cell Biology.
The tool, called ReSCU-Nets, reduces the need for manual correction and outperforms existing approaches, including the widely used Segment Anything Model developed by Meta AI. ReSCU-Nets have the potential to streamline research in cell and developmental biology, where image analysis is essential for tracking how cells behave over time.
Multidimensional microscopy images, which include temporal or depth information, are commonly used to study how tissues develop or respond to damage. However, analyzing these images is often slow and prone to error. ReSCU-Nets address this challenge by offering high segmentation accuracy while requiring relatively little training data.
“Many researchers working with microscopy images face a trade-off between accuracy and practicality,” said Rodrigo Fernandez-Gonzalez, corresponding author and a professor at BME. “Machine learning techniques tend to be data-hungry, but the average microscopist does not have access to hundreds or thousands of annotated images. We developed ReSCU-Nets to work well with limited data while still accounting for how biological structures change over time.”
To build and test ReSCU-Nets, the research team used time-lapse image sequences from developing fruit fly embryos. These included datasets showing how heart cells move during development and how skin cells respond to injury. ReSCU-Nets take advantage of the repetitive nature of multidimensional image sequences, in which the same objects, such as cells or organelles, appear across many frames. The model uses the segmentation from one frame as a prompt to guide segmentation of the next, a technique known as recurrent prompting.
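The article describes recurrent prompting only at a high level, so the sketch below illustrates the general idea rather than the authors’ published architecture: a segmentation network receives the current frame together with the previous frame’s mask as an extra input channel, and its prediction becomes the prompt for the next frame. The `TinySegmenter` stand-in, the `segment_sequence` helper, and the 0.5 threshold are assumptions made for this example, not part of ReSCU-Nets.

```python
# A minimal sketch of recurrent prompting, assuming a PyTorch environment.
# TinySegmenter is a hypothetical stand-in for a full segmentation network;
# the key idea is that the mask predicted for frame t is concatenated to
# frame t+1 as a prompt channel.
import torch
import torch.nn as nn

class TinySegmenter(nn.Module):
    """Toy segmenter: input = one image channel + one prompt-mask channel."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # single-channel mask logits
        )

    def forward(self, frame, prev_mask):
        # Concatenate the previous segmentation as an extra input channel.
        x = torch.cat([frame, prev_mask], dim=1)
        return self.net(x)

def segment_sequence(model, frames, first_mask):
    """Segment a (T, 1, H, W) sequence, prompting each frame with the
    previous frame's mask; the user supplies the mask for frame 0."""
    masks = [first_mask]
    for t in range(1, frames.shape[0]):
        logits = model(frames[t:t + 1], masks[-1])
        # Threshold the prediction so it can serve as the next prompt.
        masks.append((torch.sigmoid(logits) > 0.5).float())
    return torch.cat(masks, dim=0)

# Example on random data: 8 frames of 64x64 single-channel images.
model = TinySegmenter()
frames = torch.rand(8, 1, 64, 64)
first_mask = torch.zeros(1, 1, 64, 64)  # user-provided annotation for frame 0
with torch.no_grad():
    masks = segment_sequence(model, frames, first_mask)
print(masks.shape)  # torch.Size([8, 1, 64, 64])
```

A full implementation would use a deeper encoder-decoder backbone (the “U-Net” in the name) rather than this three-layer stack, but the frame-to-frame handoff of the mask is the recurrence the article refers to.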
The researchers compared ReSCU-Nets to several existing models and found that they consistently provided the highest segmentation accuracy with the fewest errors across all datasets.
“The next step is to really go multidimensional,” said Fernandez-Gonzalez. “We currently apply ReSCU-Nets to three-dimensional sequences, with the third dimension being time or depth, but we are excited to extend the model to four or five dimensions. We will need bigger GPUs though.”
ReSCU-Nets are integrated into PyJAMAS, the open-source image analysis platform developed by the Fernandez-Gonzalez lab (pyjamas.readthedocs.io).