This paper was selected among the top ten papers and **won the best paper award**; see the list of all finalists as well as the other prize winners here.

Idiap’s website contains some more information, see for yourself.

Needless to say, this is very exciting and we’re very proud of this achievement!

The first biological sample we imaged was the wing of a fly. This was a nice sample, but we were still looking for live samples with fluorescence markers.

We got in touch with D. Schorderet’s lab at the Institut de Recherche en Ophtalmologie (IRO) in Sion, as they are interested in SPIM imaging capabilities to support their research towards understanding vision problems. Linda Bapst was kind enough to provide us with wild-type as well as GFP-stained 30 hpf zebrafish, in order to do a test run of our microscope.

Here is a video of 3D data containing both the fluorescence signal – in green – and the transmission – blueish – of the head of a zebrafish. The eyes are nicely visible, notably the corneas, which are in green (GFP-marked), as is the optic nerve.

The classical setup to do CS is in the form of an ill-posed (under-determined) inverse problem (for a comprehensive paper, see here):

$$\min_{x} \|x\|_1 \quad \text{subject to} \quad \|y - Ax\|_2 \le \varepsilon,$$

with $y$ the observations vector, $A$ the system’s forward matrix, and $\varepsilon$ a bound on the error. Notice that this formulation minimizes an $\ell_1$ norm on the signal to reconstruct and an $\ell_2$ norm on the observations.
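To make this concrete, here is a small numerical sketch. It is my own toy example, not taken from any paper: it solves the closely related unconstrained (LASSO) form, minimizing one half of the squared $\ell_2$ data error plus $\lambda \|x\|_1$, using ISTA (iterative soft-thresholding) on an under-determined system. All sizes and parameter values are my own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Under-determined system: 30 measurements of a 100-sample, 3-sparse signal.
A = rng.standard_normal((30, 100)) / np.sqrt(30)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -2.0, 1.5]
y = A @ x_true

# ISTA: iterative soft-thresholding for min 0.5*||y - Ax||^2 + lam*||x||_1
lam = 0.01
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
x = np.zeros(100)
for _ in range(2000):
    g = A.T @ (A @ x - y)              # gradient of the data term
    z = x - g / L                      # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

print(np.round(x[[5, 37, 80]], 1))    # close to the true non-zero entries
```

Even with far fewer measurements than unknowns, the $\ell_1$ penalty recovers the sparse signal almost exactly, which is the whole point of CS.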

It is however less common to minimize an $\ell_1$ error on the data term. Kevin Chan (who kindly let me talk about his paper on this webpage), a former PhD student of Michael at UCSB, published a paper called *“Simultaneous temporal superresolution and denoising for cardiac fluorescence microscopy”*. In this paper, he solves an inverse problem by minimizing an $\ell_1$ cost function (on both the data term and the regularization). The presented method works on any quasi-periodic signal. The basic idea behind the method is to sequentially acquire multiple repetitions of a periodic signal and treat each of these repetitions as a low-resolution observation of the same high-resolution signal. All observations are then fused together into a high-resolution signal (for more information, check out the paper and some code provided here).
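The fusion idea can be illustrated with a hypothetical toy sketch of my own (this is a simplification, not the algorithm from the paper): a low frame-rate camera samples a periodic signal over several cycles, and because the frame rate and the period are incommensurate, each cycle is sampled at different phases. Sorting all samples by their phase within the cycle then yields a single, densely sampled period.

```python
import numpy as np

# A periodic signal sampled at a low frame rate over several cycles.
period = 1.0
fps = 8.3                      # low-rate camera; not a multiple of 1/period
n_cycles = 9
t = np.arange(0, n_cycles * period, 1 / fps)
samples = np.sin(2 * np.pi * t / period)   # stand-in for a pixel intensity

# Fuse: map every sample to its phase within the cycle and sort by phase.
phase = t % period
order = np.argsort(phase)
hi_res_phase = phase[order]
hi_res_signal = samples[order]

# All samples from all cycles now densely cover a single period.
print(len(hi_res_signal))
```

In the real method the observations are noisy and the fusion is formulated as an inverse problem rather than a simple sort, but the intuition is the same.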

For a demo we had to do, we decided to apply Kevin’s method on videos of a toy rotating plane, using the plugin provided here. The idea was to film the plane as it passes in front of a camera multiple times and use the method to perform temporal super-resolution.

The video hereunder shows the result of the algorithm applied to a video comprising multiple cycles of the toy plane (typically 8–9 cycles).

*All data shown here is ours and is not free to use, contact me if you’re interested in using it.*

The method was applied in the white square in the middle of the image, hence the two different time scales. The effect of the temporal super-resolution is obvious: the movement of the plane is much smoother, and the details become sharper once the plane is inside the square area.

In this paper, the inverse problem is solved by minimizing the following cost function:

$$\hat{x} = \arg\min_{x} \;\|y - Ax\|_1 + \lambda \|\Gamma x\|_1, \tag{1}$$

where $A$ is the system’s forward matrix, $x$ is the high-temporal-resolution signal of interest, $y$ is the observations vector, $\Gamma$ is a second-order Tikhonov matrix used for regularization, and $\lambda$ weighs the regularization.

In order to emphasize the effect of minimizing an $\ell_1$ norm on the data term, we included some outliers in our measurements. To do this, I simply put my hand in front of the plane as it passed in front of the camera. The video hereunder shows all 9 cycles given as input to the method. The perturbed cycle is quite obvious, with my hand occluding the plane.

I compared minimizing an $\ell_1$ and an $\ell_2$ norm in the equation presented above; the video hereunder shows both results, with the $\ell_2$ minimization on one side and the $\ell_1$ minimization on the other.

The difference is stunning: the $\ell_2$ minimization clearly lets the hand appear and you can see it flickering, while it is almost impossible to see any perturbation in the $\ell_1$ minimization.

To give an element of explanation as to why the $\ell_1$ norm is more robust to outliers than the $\ell_2$, let us take a statistical point of view.

We perform linear regression assuming that the errors (the residuals of Eq. (1)) follow either a Gaussian distribution or a Laplace distribution. These assumptions are exactly equivalent to minimizing an $\ell_2$ norm and an $\ell_1$ norm, respectively. A thorough explanation of why this is the case is outside the scope of this blog article, but let us admit that finding the maximum likelihood estimate (MLE) of the model parameters corresponds to minimizing an $\ell_2$ and an $\ell_1$ norm, respectively (a paper mentioning this can be found here).
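A one-dimensional special case makes this tangible: estimating a constant from noisy samples. The Gaussian MLE (the $\ell_2$ minimizer) is the mean, while the Laplacian MLE (the $\ell_1$ minimizer) is the median. The data values below are made up for illustration.

```python
import numpy as np

# Estimating a constant signal from noisy samples with one outlier.
# Gaussian MLE  = l2 minimizer = the mean;
# Laplacian MLE = l1 minimizer = the median.
samples = np.array([5.1, 4.9, 5.0, 5.2, 4.8, 50.0])  # 50.0 is the outlier

l2_estimate = samples.mean()      # dragged far from 5 by the outlier
l1_estimate = np.median(samples)  # barely moves

print(l2_estimate, l1_estimate)   # 12.5 vs 5.05
```

A single corrupted sample shifts the $\ell_2$ estimate by a factor of more than two, while the $\ell_1$ estimate stays essentially at the true value.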

The plot hereunder shows a Gaussian and a Laplace distribution, both with unit variance or scale ($\sigma = 1$, $b = 1$) and zero mean ($\mu = 0$).

The fact that the Laplace distribution exhibits heavy tails explains why outliers do not have such a huge impact on the MLE parameters: off-centered values remain reasonably probable. For the Gaussian, it is so unlikely to observe a value outside of ±6 sigma that any outlier will have a tremendous influence on the MLE parameters. Hence the higher robustness of $\ell_1$ over $\ell_2$.
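To put numbers on those tails, here is a quick check of the probability of landing beyond 6 standard deviations under each model. I use the unit-variance convention throughout, so the Laplace scale is $b = 1/\sqrt{2}$; that normalization choice is mine.

```python
import math

# Probability of a sample falling beyond 6 standard deviations.
# Gaussian (sigma = 1): P(|X| > 6) = erfc(6 / sqrt(2))
p_gauss = math.erfc(6 / math.sqrt(2))

# Laplace with unit variance (scale b = 1/sqrt(2)): P(|X| > 6) = exp(-6/b)
b = 1 / math.sqrt(2)
p_laplace = math.exp(-6 / b)

print(p_gauss, p_laplace)   # ~2e-9 vs ~2e-4
```

The Laplace model considers a 6-sigma event roughly five orders of magnitude more likely than the Gaussian does, which is why an outlier barely moves its MLE.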

There is nothing new research-wise in this post; robust norms were a hot topic in the 1960s, for instance with the famous Huber loss. Still, I think this makes for a beautiful illustration of the robustness of the $\ell_1$ norm to outliers.

It allows binding C++ and Python in many ways and relies heavily on metaprogramming (it is header-only). With it, it’s very simple to pass NumPy arrays from Python to C++ and, inversely, to dynamically create arrays in C++ and pass them to Python. You can install it with python-pip.

Here is a simple example of a C++ function accepting two Numpy arrays and returning the sum of both arrays in a Numpy array of the same shape.

For this dummy example, only 2-dimensional arrays are accepted, but extending to more is trivial.

The C++ code, in a file called “example.cpp”, goes as follows (you can find all the code and instructions on my GitHub page):

```cpp
#include <pybind11/pybind11.h>
#include <pybind11/numpy.h>

namespace py = pybind11;

py::array_t<double> add_arrays(py::array_t<double> input1, py::array_t<double> input2) {
    /* read input arrays buffer_info */
    py::buffer_info buf1 = input1.request(), buf2 = input2.request();

    if (buf1.size != buf2.size)
        throw std::runtime_error("Input shapes must match");

    /* allocate the output buffer */
    py::array_t<double> result = py::array_t<double>(buf1.size);
    py::buffer_info buf3 = result.request();

    double *ptr1 = (double *) buf1.ptr,
           *ptr2 = (double *) buf2.ptr,
           *ptr3 = (double *) buf3.ptr;
    size_t X = buf1.shape[0];
    size_t Y = buf1.shape[1];

    /* Add both arrays element-wise */
    for (size_t idx = 0; idx < X; idx++)
        for (size_t idy = 0; idy < Y; idy++)
            ptr3[idx*Y + idy] = ptr1[idx*Y + idy] + ptr2[idx*Y + idy];

    /* Reshape result to have same shape as input */
    result.resize({X, Y});

    return result;
}

PYBIND11_MODULE(example, m) {
    m.doc() = "Add two vectors using pybind11"; // optional module docstring
    m.def("add_arrays", &add_arrays, "Add two NumPy arrays");
}
```

To compile it I used:

```shell
c++ -O3 -Wall -shared -std=c++11 -fPIC -I/usr/include/python2.7 -lpython2.7 `python -m pybind11 --includes` example.cpp -o example`python-config --extension-suffix`
```

And call it from Python with:

```python
import numpy as np
import example

a = np.zeros((10, 3))
b = np.ones((10, 3)) * 3
c = example.add_arrays(a, b)
print c
```

That’s it; there’s no tiresome data-passing hassle, as there was when I did similar things manually.

Thanks Remy for spotting the mistakes in the HTML code that removed the `<double>` types.

We are still in the process of building the platform but have already acquired nice data with it. For example, have a look at the image hereunder: it shows the wing of a domestic fly with a nicely visible vein (the big black tube) and some wing hair.


Welcome to my very first post on my webpage.

I will present my research, but also things I find interesting or nice, as well as IT hacks that I found non-trivial.