Expectation of Poisson data contributing to a measurement
Assume the configuration of Figure 12: two radioactive sources contribute to a single measurement $N$. We know the a priori expected values $\bar{a}$ and $\bar{b}$ for each of the sources, and we know the measured count $N = a + b$. The question is to compute the expected values of $a$ and $b$, given $N$.
After the measurement, we know that the first source can only have produced $a$ counts if the other one happened to contribute $N - a$ counts. Dropping some constants, this yields

$$p(a \mid N) \sim \frac{\bar{a}^{a}}{a!} \, \frac{\bar{b}^{N-a}}{(N-a)!}.$$

The expected value of $a$ is obtained by normalizing and summing:

$$E[a \mid N] = \frac{\sum_{a=0}^{N} a \, \frac{\bar{a}^{a}}{a!} \frac{\bar{b}^{N-a}}{(N-a)!}}
                    {\sum_{a=0}^{N} \frac{\bar{a}^{a}}{a!} \frac{\bar{b}^{N-a}}{(N-a)!}}.$$

The denominator equals $(\bar{a} + \bar{b})^{N} / N!$ by the binomial theorem. A bit more work needs to be done for the numerator:
$$
\begin{aligned}
\sum_{a=0}^{N} \frac{\bar{a}^{a}}{a!} \frac{\bar{b}^{N-a}}{(N-a)!} \, a
&= \sum_{a=1}^{N} \frac{\bar{a}^{a}}{a!} \frac{\bar{b}^{N-a}}{(N-a)!} \, a
&& \text{(the summation can start from 1)} \\
&= \sum_{a=1}^{N} \frac{\bar{a}^{a}}{(a-1)!} \frac{\bar{b}^{N-a}}{(N-a)!} \\
&= \bar{a} \sum_{a=1}^{N} \frac{\bar{a}^{a-1}}{(a-1)!} \frac{\bar{b}^{N-a}}{(N-a)!} \\
&= \bar{a} \sum_{a=0}^{N-1} \frac{\bar{a}^{a}}{a!} \frac{\bar{b}^{N-1-a}}{(N-1-a)!} \\
&= \bar{a} \, \frac{(\bar{a} + \bar{b})^{N-1}}{(N-1)!}
\end{aligned}
$$
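Dividing this numerator by the denominator $(\bar{a} + \bar{b})^{N} / N!$ gives $E[a \mid N] = N \, \bar{a} / (\bar{a} + \bar{b})$: the measured counts are shared in proportion to the a priori expectations. A minimal numerical check (the values of $\bar{a}$, $\bar{b}$ and $N$ are chosen arbitrarily for illustration):

```python
from math import factorial

def posterior_expectation(a_bar, b_bar, N):
    """E[a | N], computed directly from the unnormalized posterior
    p(a | N) ~ a_bar^a / a! * b_bar^(N-a) / (N-a)!."""
    weights = [a_bar**a / factorial(a) * b_bar**(N - a) / factorial(N - a)
               for a in range(N + 1)]
    return sum(a * w for a, w in enumerate(weights)) / sum(weights)

a_bar, b_bar, N = 3.0, 5.0, 12
numeric = posterior_expectation(a_bar, b_bar, N)
closed_form = N * a_bar / (a_bar + b_bar)
print(numeric, closed_form)   # both equal 4.5
```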
This section explains why maximizing the likelihood $L_x$ of the complete variables is equivalent to maximizing the likelihood $L$ of the observed variables $Q$.
First, some notation and definitions must be introduced. As in the text, $\Lambda$ denotes the reconstruction, $Q$ the measured sinogram and $X$ the complete variables. We want to maximize the logarithm of the likelihood, given the reconstruction:
where $k(X \mid Q, \Lambda)$ is the conditional likelihood of the complete variables, given the reconstruction and the measurement. These definitions immediately imply that
which means that we write the log-likelihood of the complete variables as a function of $\Lambda'$, and that we eliminate the unknown variables $X$ by computing the expectation based on the current reconstruction $\Lambda$. Combining (10) and (11) results in
Finally, we define a generalized EM (GEM) algorithm. This is a procedure, denoted as $M$, which computes a new reconstruction from the current one. $M$ is a GEM-algorithm if
This means that we want M to increase h. We can be more demanding and require that M maximizes h; then we have a regular EM algorithm, such as the MLEM algorithm of section Iterative Reconstruction.
Now, from equation (12) we can compute what happens to $L$ if we apply a GEM-step to increase the value of $h$:
with the additional requirement that
$$\int k(X \mid Q, \Lambda) \, dX = \int k(X \mid Q, M(\Lambda)) \, dX = 1.$$
It turns out that (18) is always positive, due to the convexity of $t \ln t$. We will have a look at that now.
Consider the function $\psi(t) = t \ln t$. It is only defined for $t > 0$. Its derivatives are

$$\psi'(t) = \ln t + 1, \qquad \psi''(t) = \frac{1}{t} > 0,$$

so $\psi$ is strictly convex on its domain.
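Convexity means that every chord of the graph of $\psi$ lies above the graph itself. A quick numerical sketch of this (randomly sampled points, purely illustrative):

```python
import math
import random

def psi(t):
    """psi(t) = t * ln(t), defined for t > 0."""
    return t * math.log(t)

random.seed(0)
for _ in range(1000):
    t1 = random.uniform(0.01, 10.0)
    t2 = random.uniform(0.01, 10.0)
    lam = random.random()
    mix = lam * t1 + (1 - lam) * t2
    # Convexity: psi of the mixture never exceeds the mixture of psi values.
    assert psi(mix) <= lam * psi(t1) + (1 - lam) * psi(t2) + 1e-12
```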
Now we have proved that the GEM-step increases $L$. It still remains to be shown that the algorithm indeed converges to the maximum-likelihood solution. If you really want to know everything about it, you should read the paper by Salvo and Defrise,
“A Convergence Proof of MLEM and MLEM-3 With Fixed Background”, IEEE TMI, 38 (2019).
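The monotone increase of $L$ is easy to observe numerically. The sketch below runs the MLEM update on a small toy problem — the system matrix and image are invented for illustration — and checks that the Poisson log-likelihood never decreases from one iteration to the next:

```python
import numpy as np

# Toy system: 3 detectors, 3 pixels (matrix chosen arbitrarily),
# with noiseless data q generated from a known image.
A = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.7, 0.2],
              [0.0, 0.3, 0.9]])
lam_true = np.array([4.0, 2.0, 6.0])
q = A @ lam_true

def loglik(lam):
    """Poisson log-likelihood of the data q (constant terms dropped)."""
    y = A @ lam
    return float(np.sum(q * np.log(y) - y))

def mlem_step(lam):
    """One MLEM update: lam_j <- lam_j * sum_i A_ij q_i/y_i / sum_i A_ij."""
    y = A @ lam
    return lam * (A.T @ (q / y)) / A.sum(axis=0)

lam = np.ones(3)
L = [loglik(lam)]
for _ in range(20):
    lam = mlem_step(lam)
    L.append(loglik(lam))

# Each EM step increases (never decreases) the log-likelihood.
assert all(L[i + 1] >= L[i] - 1e-9 for i in range(len(L) - 1))
```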
Consider again the projection of a point source in the center of the field of view. With time-of-flight, the parallel projection along angle $\theta$ can be considered as a Gaussian blurring along lines with angle $\theta$. That gives for the TOF-sinogram $q(x, y, \theta)$:
where $\sigma$ represents the uncertainty of the TOF-measurement. The corresponding backprojection operator is obtained by applying the same blurring to the projections, followed by integration over $\theta$. The convolution of a Gaussian with itself is equivalent to multiplying its standard deviation by $\sqrt{2}$ (see appendix Combining resolution effects: convolution of two Gaussians). One finds
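The $\sqrt{2}$ factor can be verified numerically: convolving a sampled Gaussian with itself produces a kernel whose variance is the sum of the two variances, hence a standard deviation of $\sqrt{2}\,\sigma$ (the value of $\sigma$ below is arbitrary):

```python
import numpy as np

sigma = 2.0
x = np.arange(-40.0, 41.0)            # sampling grid, wide enough to avoid truncation
g = np.exp(-x**2 / (2 * sigma**2))
g /= g.sum()                          # normalized Gaussian kernel

h = np.convolve(g, g, mode="full")    # the Gaussian blurred by itself
xh = np.arange(-80.0, 81.0)           # grid of the convolved kernel
var = np.sum(h * xh**2) / np.sum(h)   # variance of the result

print(np.sqrt(var))                   # ~ sigma * sqrt(2) = 2.828...
```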