On a mathematical theory of coded exposure
 Yohann Tendero^{1} and
 Stanley Osher^{2}
https://doi.org/10.1186/s40687-015-0051-8
© Tendero and Osher. 2016
Received: 11 July 2015
Accepted: 16 December 2015
Published: 20 March 2016
Abstract
This paper proposes a mathematical model and formalism to study coded exposure (flutter shutter) cameras. The model includes the Poisson photon (shot) noise as well as any additive (readout) noise of finite variance. This is an improvement over our previous work, which only considered the Poisson noise. Closed formulae for the mean square error and signal to noise ratio of the coded exposure method are given. These formulae take into account the whole imaging chain, i.e., the Poisson photon (shot) noise, any additive (readout) noise of finite variance, as well as the deconvolution, and are valid for any exposure code. Our formalism allows us to provide a curve that gives an absolute upper bound for the gain of any coded exposure camera as a function of the temporal sampling of the code. The gain is to be understood in terms of mean square error (or, equivalently, in terms of signal to noise ratio) with respect to a snapshot (a standard camera).
1 Background
Since the seminal papers [1–6] of Agrawal and Raskar, the coded exposure (flutter shutter) method has received many follow-ups [7–39]. In a nutshell, the authors proposed to open and close the camera shutter, according to a sequence called a “code,” during the exposure time. By this clever exposure technique, the coded exposure method permits one to arbitrarily increase the exposure time when photographing (flat) scenes moving at a constant velocity. Note that with a coded exposure method only one picture is stored/transmitted. A rich body of empirical results suggests that the coded exposure method allows for a gain in terms of Mean Square Error (MSE) or Signal to Noise Ratio (SNR) compared to a classic camera, i.e., a snapshot. Therefore, the coded exposure method seems to be a magic tool that should equip all cameras.
We now briefly review the different applications, variants, and studies that surround the coded exposure method. An application of the coded exposure method to bar codes is given in [16, 35], to fluorescent cell imaging in [27], to periodic events in [25, 34, 36], to multispectral imaging in [10], and to iris imaging in [21]. Applications to motion estimation/deblurring are presented in [9, 19, 20, 26, 31, 33, 37, 38]. An extension for space-dependent blur is investigated in [28]. Methods to find better or optimal sequences are investigated in [12–14, 22, 23, 39] and in [15], which aims at adapting the sequence to the velocity. Diverse implementations of the method are presented in [17, 18, 24, 32]. The method is used for spatial/temporal trade-off in [7, 8, 11]. A numerical and mathematical investigation of the gain of the method is in [29, 30], but their camera model contains only photon (shot) noise and neglects all other noise sources, contrary to the model we shall develop in this paper.
Therefore, as far as we know, little is known about the coded exposure method from a rigorous mathematical point of view, and it seems useful for applications to build a theory that sheds some light on this promising method. For instance, to the best of our knowledge, little is known about the gain, in terms of MSE and SNR, of the coded exposure method compared to a standard (snapshot) camera. This paper proposes a mathematical model of photon acquisition by a light sensor. The model can cope with any additive readout noise of finite variance in addition to the Poisson photon (shot) noise. The model is compatible with the Shannon–Whittaker framework; it assumes that the relative camera scene velocity is constant and known, that the sensor does not saturate, that the readout noise has finite variance, and that the coded exposure method allows for an invertible transformation among the class of band-limited functions (this means that the observed image can be deblurred using a filter). Note that with this model the image has a structure: the image is assumed to be band limited. This set of assumptions represents an ideal mathematical framework that allows us to give a rigorous analysis of the limits, in terms of MSE and SNR, of the coded exposure method. For instance, it is clear that the MSE (resp. SNR) will increase (resp. decrease) if one needs to estimate the velocity from the observed data, compared to the formulae we shall prove in this theoretical paper.
To be thorough, a mathematical analysis of a camera requires going rigorously from the continuous observed scene to the discrete samples of the final restored image. This is needed to mathematically analyze the whole image chain: from the photon emission to the final restored image via the observed discrete samples measured by the camera. As far as we know, the coded exposure method is most useful for moving scenes. Consequently, we need a formalism capable of dealing with moving scenes. Since the observed scene moves continuously with respect to time, we adopt a continuous point of view. This means that we shall model the observed scene as a function s. Loosely speaking, s(x) gives the light intensity at a spatial position x (by contrast, a discrete formalism would model the observed scene as a vector of \(\mathbb {R}^n\) but requires a more restrictive assumption, see below). We shall rely on the Shannon–Whittaker framework (see, e.g., [40]) to perform the mathematical analysis of sampling-related questions. This framework requires the structure of band-limited (with a cutoff frequency) signals or images and will allow us to perform a rigorous mathematical analysis of the coded exposure method. Recall that a discrete formalism would model the observed scene as a vector of \(\mathbb {R}^n\) and the convolution would use Toeplitz matrices. Therefore, the scene would be assumed to be periodic and also band limited for sampling purposes. Note that the continuous formalism that we shall develop in this paper does not require assuming that the observed scene s is periodic (most natural scenes are not periodic). However, the adaptation of the formalism that we shall develop in this paper to periodic band-limited scenes is straightforward if needed for some application.
Our first goal is to provide closed mathematical formulae that give the MSE and SNR of images obtained by a coded exposure camera. Therefore, we shall start by carefully modeling the photon acquisition by a light sensor then deduce a mathematical model of the coded exposure method.

Assume the Shannon–Whittaker framework, which (1) requires band-limited (with a frequency cutoff) images, and (2) that the pixel size is designed according to the Shannon–Whittaker theory. In this paper, we prove the validity of the Shannon–Whittaker interpolation for nonstationary noises (see also Sect. 2.2).

Assume that the relative camera scene velocity is constant and known.

Assume that the sensor does not saturate.

Assume that the additive (sensor readout) noise has zero mean and finite variance (this term contains, without loss of generality, the quantization noise).

Assume that the coded exposure allows for an invertible transformation among the class of bandlimited functions (this means that the observed image can be deblurred using a filter).

Neglect the boundary effects for the deconvolution (the inverse filter of a coded exposure camera has a larger support than the inverse filter of a snapshot; thus, this slightly overestimates the gain of the coded exposure method with respect to the snapshot).
The paper is organized as follows. Section 2 gives a mathematical model of classic cameras. This mathematical model is extended in Sect. 3 to model coded exposure cameras. Section 4 gives an upper bound for the gain of the coded exposure method, in terms of MSE and SNR with respect to a snapshot, as a function of the temporal sampling of the code. The upper bound of Corollary 4.2 is illustrated in Fig. 2. In addition, Table 1 provides numerical experiments illustrating these results. Appendices A–L contain several proofs of propositions that are used throughout this paper. A glossary of notations is in Appendix M (in the sequel, Latin numerals refer to the glossary of notations).
2 A mathematical model of classic cameras
The goal of this section is to provide a mathematical model of the photon acquisition by a light sensor and the formalism that we shall use to model the coded exposure method in the sequel.
As usual in the coded exposure literature [2, 3, 9, 21, 22, 30, 35, 46], and for the sake of clarity, we shall formalize the coded exposure method using a one-dimensional framework. In other words, the sensor array and the observed image are assumed to be one-dimensional. One could think that this one-dimensional framework is a limitation of the theory. However, it is no limitation. Indeed, as we have seen, we assume that the image acquisition obeys the Shannon–Whittaker sampling theory. This means that the frequency cutoff is compatible with the image sampling grid. The extension to any two-dimensional grid (and two-dimensional images) is straightforward (the sketch of the proof is in Appendix A). Therefore, the one-dimensional framework that we shall consider is no limitation for the scope of this paper, which proposes a mathematical analysis of coded exposure cameras. A fortiori, the calculations of MSE and SNR that we shall propose in this paper remain valid for two-dimensional images. The noise is, in general, nonstationary. This is due both to the sensor (see, e.g., [47]) and to the observed scene. In this paper, we also prove that the Shannon–Whittaker interpolation is valid for nonstationary noises (see also Sect. 2.2). In addition, we shall assume that the motion blur kernel is known, i.e., the relative camera scene velocity vector and the exposure code (or function) are known (this kernel is called “PSF motion” in, e.g., [1] and is also assumed to be known there [1, p. 2]).
We now turn to the mathematical model of photons acquisition by a light sensor.
2.1 A mathematical model of photons acquisition by a light sensor
The goal of this subsection is to give a rigorous mathematical definition (see Definition 2) of the samples produced by a pixel sensor that observes a moving scene. This definition of the observed sample can cope with any additive zero-mean (sensor readout) noise of finite variance in addition to the standard Poisson photon (shot) noise. Note that the model developed in [30] does not consider any additive (sensor readout) noise. Therefore, the results of [30] do not cover this more elaborate mathematical model. In particular, the advantages of the coded exposure method in terms of MSE, within this more elaborate setup, are, to the best of our knowledge, open questions.
We consider a continuous formalism in order to ease the transition from steady scenes to scenes moving at an arbitrary real velocity. Another advantage of this continuous formalism is that it allows us to avoid the implicit periodicity assumption on the observed scene that is needed if one uses Toeplitz matrices to represent the convolutions (see, e.g., [1, Eq. 2, p. 3]); this matters because, in general, natural scenes are not periodic.
Remark
We implicitly assume a 100 % fill factor for the sensor, as the pixel sensor is supported on \([-\frac{1}{2},\frac{1}{2}]\) and we have a pixel sensor at every unit. This is no loss of generality for studying the gain of the flutter shutter with respect to a snapshot. Indeed, the fill factor impacts the snapshot and the flutter shutter equally. In addition, the RMSE calculations are carried out using the function u in (3) as reference and using an unbiased estimator for u. Thus, all results we give in this paper hold if one replaces u by \(u=\mathbb {1}_{[-\varepsilon ,\varepsilon ]}*g*s\) in (3) for any \(\varepsilon \in (0,\frac{1}{2}]\).
Consider the deterministic function formally defined by \(u:=\mathbb {1}_{[-\frac{1}{2},\frac{1}{2}]}*g*s\). The deterministic quantity u(x) represents the gray level of the image at position x if there were no noise and no motion. Indeed, u contains the kernels of the optical system \(g\) and of the sensor. Note that the quantity u(x) can also be seen as an intensity of light emission received by a unit pixel sensor centered at x. With the formalism of, e.g., [1, Eq. 1, Sect. 2], \(\mathbb {1}_{[-\frac{1}{2},\frac{1}{2}]}\) represents “\(h_{\text {sensor}}\)” and \(g\) represents “\(h_{\text {lens}}\).”
With the formalism we introduced we can compute the SNR of the produced image, just to verify that we retrieve the fundamental theorem of photography that underlies statements like “the capture SNR increases proportional to the square root of the exposure time” that can be found in, e.g., [2, p. 1, column 2, 1st paragraph]. To this aim, consider the case where \(v=0\), \(t_1=0\), \(t_2=\Delta t\) in (7). If the observed value \(\mathrm{obs}(x)\) at position \(x \in \mathbb {R}\) follows \(\mathcal {P}\left( \int _0^{\Delta t} u(x)\mathrm{d}t \right) \) we have \(\mathbb {E}(\mathrm{obs}(x))=\Delta t u(x)\). This means that, in expectation, the number of photons caught by the pixel sensor centered at x increases linearly with the exposure time. If we time-normalize the obtained quantity and consider, formally, a random variable \(\mathbb {u}_\mathrm{est}(x)\) that follows \(\frac{\mathcal {P}\left( \int _0^{\Delta t} u(x)\mathrm{d}t \right) }{\Delta t}\) we obtain \(\mathbb {E}\left( \mathbb {u}_\mathrm{est}(x) \right) =u(x)\). This means that \(\mathbb {u}_\mathrm{est}(x)\) estimates u(x) without bias. In addition, we have \(\mathrm {var}\left( \mathbb {u}_\mathrm{est}(x) \right) =\frac{\Delta t u(x)}{ \Delta t^2}=\frac{u(x)}{\Delta t}.\)
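This back-of-the-envelope computation can be checked by simulation. The sketch below (with illustrative values for \(u(x)\) and \(\Delta t\), not taken from the paper) draws Poisson samples \(\mathcal {P}(\Delta t\, u(x))\), time-normalizes them, and verifies that the empirical mean approaches u(x) while the empirical variance approaches \(u(x)/\Delta t\):

```python
import numpy as np

# Hedged sketch: Monte Carlo check of the steady-scene pixel model above.
# u_val (the noiseless intensity u(x)) and dt (the exposure time Delta t)
# are illustrative values, not taken from the paper.
rng = np.random.default_rng(0)

u_val = 40.0        # light intensity u(x) at one pixel
dt = 5.0            # exposure time Delta t
n_trials = 200_000

# obs(x) ~ Poisson(Delta t * u(x)); time-normalized estimator obs / Delta t
obs = rng.poisson(u_val * dt, size=n_trials)
u_est = obs / dt

emp_mean = u_est.mean()   # should approach u(x): the estimator is unbiased
emp_var = u_est.var()     # should approach u(x) / Delta t

print(emp_mean, emp_var, u_val / dt)
```

Increasing `dt` shrinks the variance \(u(x)/\Delta t\), which is exactly the square-root SNR improvement with exposure time quoted above.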
We now give a mathematical framework to make the above formulae precise. We shall assume that the scene \(s\in L^1_{loc}(\mathbb {R})\) so that the convolution in (3) is well defined. We shall assume that the PSF \(g\) belongs to the Schwartz class which, hereinafter, we shall denote by \(S(\mathbb {R})\). In addition, we shall assume that the PSF \(g\in S(\mathbb {R})\) furnishes a cutoff frequency. This assumption is needed by the Shannon–Whittaker sampling theory. We shall assume that the frequency cutoff of \(g\) is \(\pi \), i.e., \(g\) is \([-\pi ,\pi ]\) band limited. In other words, \(\hat{g}(\xi )=0\) for any \(\xi \in \mathbb {R}\) such that \(|\xi |>\pi \), where, here and in the sequel, we denote by \(\hat{g}\) or \(\mathcal {F}(g)\) the Fourier transform of \(g\) [see (14) for the definition of the Fourier transform] and (here and elsewhere) \(\xi \in \mathbb {R}\) represents the (Fourier) frequency coordinate. One could think that this \([-\pi ,\pi ]\) band limitation is a limitation for the theory. However, it is not. The choice of \([-\pi ,\pi ]\) in the following definition is thoroughly justified in Appendix B.
Definition 1
(The observable scene u) We call observable scene any nonnegative deterministic function u of the form \( u=\mathbb {1}_{[-\frac{1}{2},\frac{1}{2}]} *g*s.\) Recall that \(\mathbb {1}_{[-\frac{1}{2},\frac{1}{2}]}\) denotes the characteristic function of the interval \([-\frac{1}{2},\frac{1}{2}]\) and is related to the normalized pixel sensor. The PSF satisfies \(g\in S(\mathbb {R})\), \(g\ge 0\), and is \([-\pi ,\pi ]\) band limited. The (nonnegative) photon emission intensity is denoted by \(s\in L^1_{loc}(\mathbb {R})\). We have \(u\in L_{loc}^1(\mathbb {R})\) and we assume that u satisfies \(\mu :=\lim _{R\rightarrow +\infty }\frac{1}{2R}\int _{-R}^{R} u(x)\,\mathrm{d}x \in \mathbb {R}^+\). In addition, we assume that \(\tilde{u}:=u-\mu \in L^1(\mathbb {R}) \cap L^2(\mathbb {R})\).
Note that u is the sum of the constant \(\mu \) and of \(\tilde{u} \in L^1(\mathbb {R})\). Thus, we have \(u\in S'(\mathbb {R})\) (the space of tempered distributions). This means that u enjoys a Fourier transform in \(S'(\mathbb {R})\), see, e.g., [50, p. 173], see also [51, p. 23]. In addition, u and \(\tilde{u}\) inherit the frequency cutoff of the PSF \(g\). Therefore, u and \(\tilde{u}\) are \([-\pi ,\pi ]\) band limited. In addition, note that the assumption \(\tilde{u} \in L^2(\mathbb {R})\) is w.l.o.g. Indeed, since \(\tilde{u} \in L^1(\mathbb {R})\), from the Riemann–Lebesgue theorem (see, e.g., [52, Proposition 2.1]), we have that \(\hat{\tilde{u}}\) is continuous. In addition, since \(\tilde{u}\) is \([-\pi ,\pi ]\) band limited, \(\hat{\tilde{u}}\) is continuous and has compact support. We deduce that \(\hat{\tilde{u}}\in L^2(\mathbb {R})\) and, therefore, that \(\tilde{u} \in L^2(\mathbb {R})\).
We can now give a definition of the observed sample at a pixel centered at \(x \in \mathbb {R}\) that we shall denote \(\mathrm{obs}(x)\).
Definition 2
In the sequel we will need to compute MSEs as well as SNRs. Therefore, we will need to compute expected values and variances of the observed samples. Thus, we need to justify the validity of these operations. This is done in Appendix C.
Definition 2 entails that \(\mathrm{obs}(x)\), the observed sample of a pixel sensor centered at position x, is a measurable function (a random variable; see, e.g., Shiryaev, Probability, p. 168) for which it is mathematically possible to compute, e.g., the expectation and the variance.
The images produced by a digital camera are discrete. In addition, the image obtained by a coded exposure camera must undergo a deconvolution to get the final crisp image. The calculation of the adequate deconvolution filter requires a continuous model. Thus, we now turn to sampling and interpolation in order to go comfortably from the discrete observations to the latent continuous image.
2.2 Sampling and interpolation
Therefore, in the sequel, we assume that the observed samples are obtained from a sensor array and that the sensor array is designed according to the Shannon–Whittaker sampling theory. Thus, we assume that the samples \(\mathrm{obs}(x)\) are obtained at a unit rate, i.e., for \(x \in \mathbb {Z}\). Consequently, we shall denote the observed samples by \(\mathrm{obs}(n)\). This means that, in the sequel, we shall neglect the boundary effects due to the deconvolution. This is another way to get rid of the boundary effects without assuming that the observed scene is periodic, as required by linear-algebra-based models (with Toeplitz matrices) (see, e.g., [1–3, 5, 12, 25, 29, 30]); this matters because most natural scenes are not periodic. Note that this slightly overestimates the gain of the coded exposure method with respect to the snapshot. Indeed, the support of the coded exposure function is larger than the support of the exposure function of a snapshot. This means that, in practice, the boundary artifacts due to the deconvolution are stronger with the coded exposure method.
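As a concrete illustration of the unit-rate sampling convention, the following sketch reconstructs a band-limited test signal from its integer samples via the Shannon–Whittaker formula. The test signal \(f(x)=\mathrm {sinc}^2(x/2)\) is our own choice (its spectrum is a triangle supported on \([-\pi ,\pi ]\)), and the truncation of the interpolation sum is a numerical convenience:

```python
import numpy as np

# Hedged sketch: Shannon-Whittaker reconstruction from unit-rate samples.
# f is an illustrative [-pi, pi] band-limited test signal, not a quantity
# from the paper. Note np.sinc(t) = sin(pi t) / (pi t).
def f(x):
    return np.sinc(x / 2.0) ** 2   # spectrum: triangle on [-pi, pi]

n = np.arange(-2000, 2001)         # unit-rate sample positions (truncated)
samples = f(n)

def interpolate(x):
    # f(x) = sum_n f(n) sinc(x - n), truncated to |n| <= 2000
    return np.sum(samples * np.sinc(x - n))

x0 = 0.3                           # a non-integer evaluation point
err = abs(interpolate(x0) - f(x0))
print(err)
```

The reconstruction error at `x0` comes only from truncating the sum; for a signal sampled exactly at its Nyquist rate the formula is exact.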
Hereinafter, we assume that the random variables \((\eta (n))_{n \in \mathbb {Z}}\) are mutually independent, identically distributed, and independent from the shot noise, i.e., independent from \(\mathcal {P}\left( \int _{t_1}^{t_2} u(n-vt)\, \mathrm{d}t\right) \) where \(n \in \mathbb {Z}\). This independence assumption represents no limitation for the model. Indeed, a photon can only be sensed once. In addition, the additive (sensor readout) noise comes from an inaccurate reading of the sample value that does not depend on the light intensity emission or on the Poisson photon (shot) noise.
We have defined the observed samples produced by a light sensor in Definition 2. This definition includes both the Poisson photon (shot) noise and an additive (readout) noise of finite variance. We now turn to the mathematical formalization of the coded exposure method.
3 A mathematical model of coded exposure camera that includes any additive (sensor readout) noise of finite variance in addition to the Poisson photon (shot) noise
The goal of this section is to formalize the coded exposure method. In this section, we consider invertible “exposure codes” and provide the MSE and SNR of these exposure strategies. The study leads to Theorem 3.4.
The coded exposure (flutter shutter) method makes it possible to modulate, over time, the photon flux caught by the sensor array. Indeed, the Agrawal and Raskar coded exposure method [1–6] consists in opening/closing the camera shutter on subintervals of the exposure time. In such a situation, the exposure function that controls when the shutter is open or closed is binary and piecewise constant. Since it is piecewise constant, it is possible to encode this function using an “exposure code.” (We give a mathematical definition of these objects below.)
Note that neither the model nor the results of [30] can be used in this paper. Indeed, in [30] the additive (sensor readout) noise is neglected. Therefore, the formalism of [30] does not hold in the more elaborate setup that we shall consider here. Indeed, this paper considers any additive sensor readout noise of finite variance in addition to the Poisson photon (shot) noise.
As we have seen, in their seminal work [1–6], Agrawal and Raskar propose to use binary exposure codes. Yet, mathematically, one could envisage smoother exposure codes that are not binary. Indeed, with a larger search space for the exposure code, the MSE and SNR can be expected to be better than with the smaller set of binary codes. Therefore, in the sequel, we shall assume that the exposure codes have values in [0, 1]. The value 0 means that the shutter is closed, the value 1 means that the shutter is open, and, e.g., \(\frac{1}{2}\) means that half of the photons are allowed to reach the sensor. We do not consider the practical feasibility of these nonbinary exposure codes, as this is outside the scope of this paper, which proposes a mathematical framework and formulae.
We first formalize the fact that the coded exposure method temporally modulates the flux of photons that are allowed to reach the sensor by giving an adequate definition of an “exposure function” that, hereinafter, we shall denote by \(\alpha \). To be precise, the gain \(\alpha (t)\) at time t is defined as the proportion of photons that are allowed to travel to the sensor. We then give the formula of the observed samples taking the exposure function into account (see Definition 4).
Definition 3
Remark
It is easy to see that \(\alpha \in L^1(\mathbb {R}) \cap L^2(\mathbb {R}) \cap L^\infty (\mathbb {R})\) and that the above definition can cope with finitely supported codes, e.g., the Agrawal and Raskar code [5, p. 5] and patent application [6, p. 5].
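Definition 3 is not reproduced above, so the sketch below follows the usual convention of the coded exposure literature: a piecewise-constant exposure function \(\alpha (t)=\sum _k a_k \mathbb {1}_{[k\Delta t,(k+1)\Delta t)}(t)\) built from a binary code \((a_k)\). The code and time step are arbitrary illustrative values, not the Agrawal–Raskar code from [5]:

```python
import numpy as np

# Hedged sketch: a finitely supported, piecewise-constant exposure function
# alpha(t) = sum_k a_k 1_{[k*dt, (k+1)*dt)}(t) built from a binary code.
# The code and dt below are arbitrary illustrative values.
code = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])  # a_k in {0, 1}
dt = 0.25                                        # code time step Delta t

def alpha(t):
    """Proportion of photons let through at time t (0 outside the support)."""
    k = int(np.floor(t / dt))
    if 0 <= k < len(code):
        return float(code[k])
    return 0.0

# Total effective exposure: integral of alpha = dt * sum(code)
total_exposure = dt * code.sum()
print(alpha(0.1), alpha(0.3), total_exposure)
```

Such an \(\alpha \) clearly lies in \(L^1(\mathbb {R}) \cap L^2(\mathbb {R}) \cap L^\infty (\mathbb {R})\), consistent with the remark above.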
We have defined the exposure function that controls with respect to the time the camera shutter. We now give the formula of the observed samples of a coded exposure camera.
Definition 4
Recall that the random variables \(\mathrm{obs}(n)\) observed for \(n \in \mathbb {Z}\) are mutually independent. From Definition 1 we have that u is of the form \(L^1(\mathbb {R})\) plus constant and is band limited. From Definition 3 we have that \(\alpha \in L^1(\mathbb {R})\). We obtain that the convolution in (13) is well defined everywhere. In addition, note that the pixels are read only once as in, e.g., [1–6]. (Only one image is observed, stored and transmitted).
Proposition 3.1
Proof
The proof is a direct consequence of Definition 4. \(\square \)
Remark
The observed samples of any coded exposure camera are formalized in Definition 4. We wish to compute the MSE and SNR of a deconvolved crisp image with respect to the continuous observable scene u. To this aim, a continuous deconvolved crisp signal \(\mathbb {u}_\mathrm{est}\) must be defined from the samples \(\mathrm{obs}(n)\) observed for \(n \in \mathbb {Z}\). Thus, we (1) prove the mathematical feasibility of the Shannon–Whittaker interpolation “\(\mathrm{obs}(x)=\sum _{n \in \mathbb {Z}} \mathrm{obs}(n) \ \mathrm {sinc}\ (x-n)\)”, (2) deduce the conditions on the exposure function \(\alpha \) for the existence of an inverse filter \(\gamma \) that deconvolves the observed samples, (3) define the final crisp image \(\mathbb {u}_\mathrm{est}\), and (4) give the formulae for the MSE and SNR of \(\mathbb {u}_\mathrm{est}\). The study leads to Theorem 3.4.
The mathematical feasibility of the Shannon–Whittaker interpolation is formalized by
Proposition 3.2
Proof
See Appendix D. \(\square \)
We now treat step 2. We cannot resort to a Wiener filter to define \(\gamma \). Indeed, due to the Poisson photon (shot) noise, the noise in our observations \(\mathrm{obs}(n)\) defined in Definition 4 is not additive. Therefore, the Wiener filter is not defined (see, e.g., [56, p. 205], [55, p. 95], [57, p. 159]; see also [58, p. 252] for a definition). Instead of using a Wiener filter, we shall propose a filter designed so that the restored crisp image \(\mathbb {u}_\mathrm{est}\) is unbiased. This is also the setup considered in, e.g., [1, Sect. 3.1, p. 6]. We now provide the condition under which an inverse filter \(\gamma \) yields an unbiased restored crisp image \(\mathbb {u}_\mathrm{est}\).
If the exposure function \(\alpha \) defined in Definition 3 satisfies \(\hat{\alpha }(\xi v)= 0\) for some \(\xi \in [-\pi ,\pi ]\), the convolution in (18) is not invertible and some information is destroyed. Therefore, it is no longer possible to retrieve an arbitrary observed scene u (in a discrete setting, this would mean that the Toeplitz matrix associated with the convolution kernel is not invertible). Thus, if \(\hat{\alpha }(\xi v)= 0\) for some \(\xi \in [-\pi ,\pi ]\), there exists no inverse filter \(\gamma \) capable of giving back an arbitrary observed scene u. Hence, we assume that the exposure function \(\alpha \) satisfies \(\hat{\alpha }(\xi v)\ne 0\) for every \(\xi \in [-\pi ,\pi ]\). Under this condition, the convolution \(\left( \frac{1}{v} \alpha \left( \frac{\cdot }{v}\right) \right) *u\) in (18) is invertible because u is \([-\pi ,\pi ]\) band limited. Therefore, we have
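The invertibility condition \(\hat{\alpha }(\xi v)\ne 0\) for every \(\xi \in [-\pi ,\pi ]\) can be checked numerically for a candidate code. The sketch below assumes the Fourier convention \(\hat{\alpha }(\xi )=\int \alpha (t)e^{-i\xi t}\,\mathrm{d}t\) [the paper's (14) is not reproduced here] and uses an arbitrary illustrative binary code, time step, and velocity:

```python
import numpy as np

# Hedged sketch: numerical check of the condition hat{alpha}(xi v) != 0
# on [-pi, pi]. Assumed Fourier convention: hat{alpha}(xi) =
# int alpha(t) exp(-i xi t) dt. Code, dt, and v are illustrative values.
code = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
dt = 0.25          # code time step Delta t
v = 1.0            # relative camera-scene velocity

t = np.linspace(0.0, len(code) * dt, 5001)
step = t[1] - t[0]
a = code[np.minimum((t / dt).astype(int), len(code) - 1)].astype(float)

xi = np.linspace(-np.pi, np.pi, 201)
# hat{alpha}(xi * v) by Riemann sum, one row per frequency
kernel = np.exp(-1j * np.outer(xi * v, t))
hat_alpha = (kernel * a).sum(axis=1) * step

# The code is deconvolvable (for this v) iff the minimum modulus on
# [-pi, pi] stays away from zero.
min_modulus = np.abs(hat_alpha).min()
print(min_modulus)
```

At \(\xi =0\) the quadrature returns \(\hat{\alpha }(0)=\int \alpha \), i.e., the total effective exposure time, which gives a quick sanity check of the computation.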
Definition 5
Remark
From Definition 5, we deduce that \(\mathbb {R}\ni \xi \mapsto \hat{\gamma }(\xi )\) is bounded and has compact support. Hence, we have \(\hat{\gamma }\in L^1(\mathbb {R}) \cap L^2(\mathbb {R})\) and therefore \(\gamma \in L^2(\mathbb {R})\). In addition, from the Riemann–Lebesgue theorem, we have that \(\gamma \) is continuous and bounded. Consequently, \(\gamma \) is \([-\pi ,\pi ]\) band limited, \(C^{\infty }(\mathbb {R})\), bounded, and belongs to \(L^2(\mathbb {R})\).
We now treat step 3. The mathematical feasibility of the deconvolved crisp signal \(\mathbb {u}_\mathrm{est}\) is formalized by
Proposition 3.3
This proposition means that \(\mathbb {u}_\mathrm{est}\) is an unbiased estimator of the observable scene u.
Proof
See Appendix F. \(\square \)
We now treat step 4. We have
Theorem 3.4
(MSE and SNR of the coded exposure method) Let \(\mathbb {u}_\mathrm{est}\) be as in Proposition 3.3. Consider a scene \(u(xvt)\) (see Definition 1) that moves at velocity \(v\in \mathbb {R}\) and let \(\sigma _r^2\) be the (finite) variance of the additive (readout) noise.
Proof
See Appendix I. \(\square \)
We now connect the formulae in Theorem 3.4 with the existing literature on the coded exposure method. The mean photon emission \(\mu \) relates to \(\bar{i}_0\) in, e.g., [2, Sect. 2]. In addition, from (25), we have that for a fixed exposure function \(\alpha \) and additive (readout) noise variance \(\sigma _r^2\), the SNR evolves proportionally to \(\sqrt{\frac{\mu }{1 +\frac{\sigma _r^2}{\mu }}}\) with the mean photon emission \(\mu \). In particular, from (25), if \(\sigma _r^2=0\) and for a fixed \(\alpha \), we deduce that the SNR evolves proportionally to \(\sqrt{\mu }\) and we retrieve the fundamental theorem of photography. Note that it is equivalent to minimize (24) or to maximize (25) with respect to the exposure function \(\alpha \). Therefore, in the sequel, we choose w.l.o.g. to use formula (24) and to evaluate the performance of the coded exposure method in terms of MSE. The calculation for the SNR can be immediately deduced. As an easy application of Theorem 3.4, we have the following corollary that provides the MSE of any invertible snapshot, i.e., one that satisfies \(v\Delta t<2\) (see Sect. 5), where \(\Delta t\) is the exposure time. This corollary will also be needed to compare the coded exposure method and the snapshot, in terms of MSE, in Sect. 4.
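The SNR scaling just discussed can be made concrete with a few numbers. The sketch below evaluates the factor \(\sqrt{\mu /(1+\sigma _r^2/\mu )}\) for a fixed readout variance and growing mean photon emission \(\mu \) (illustrative values) and compares it with the readout-free regime \(\sqrt{\mu }\):

```python
import numpy as np

# Hedged sketch: the SNR scaling factor sqrt(mu / (1 + sigma_r^2 / mu))
# from the discussion above, with illustrative values for sigma_r^2 and mu.
sigma_r2 = 100.0
mu = np.array([100.0, 400.0, 1600.0])

snr_factor = np.sqrt(mu / (1.0 + sigma_r2 / mu))

# With sigma_r2 = 0 the factor reduces to sqrt(mu): the "fundamental
# theorem of photography" regime.
snr_no_readout = np.sqrt(mu)
print(snr_factor, snr_no_readout)
```

The readout noise always pulls the SNR below \(\sqrt{\mu }\), and its relative impact fades as \(\mu \) grows, matching the trend of the RMSE table below.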
Corollary 3.5
We now turn to Sect. 4, which proposes a theoretical evaluation of the gain of the coded exposure method with respect to a snapshot.
4 An upper bound of performance for coded exposure cameras
This section studies the gain, in terms of MSE, of the coded exposure method, with respect to a snapshot, as a function of the exposure code sampling rate. The study leads to a theoretical bound that is formalized in Theorem 4.1 and Corollary 4.2. The bound is valid for any exposure code provided \(v\Delta t \le 1\) (we recall that the exposure code sampling rate \(\Delta t\) is defined in Definition 3). This means that the proposed bound is an upper bound for the gain of any coded exposure camera, provided \(v\Delta t \le 1\). We have
Theorem 4.1
Proof
See Appendix J. \(\square \)
Corollary 4.2
Proof
See Appendix K. \(\square \)
Corollary 4.2 directly provides an upper bound for the gain of the coded exposure method, in terms of MSE, with respect to a snapshot. Given our hypotheses, this bound is valid for any code as soon as \(v\Delta t \le 1\). We now depict, in Fig. 2, the upper bound of Corollary 4.2 as the quantity \(v\Delta t\) varies. Note that the quantity \(v\Delta t\) is inversely proportional to the temporal sampling frequency of the exposure code. Note also that the curve is an upper bound. Thus, the actual gain of the coded exposure method is below this curve.
We now illustrate Corollary 4.2 numerically in Fig. 3 and Table 1.
This table provides the evolution of the RMSE as the mean photon emission intensity varies, for a fixed additive (readout) noise variance.

Mean photon count per second              36     64     100    225    625   1200   2500   4900   \(10^4\)
Readout noise variance \(\sigma _r^2\)    100    100    100    100    100    100    100    100    100
RMSE flutter                              42.4   31.83  25.09  16.52  9.84   7.12   4.94   3.57   2.49
RMSE snapshot                             38.75  25.40  18.44  10.88  5.96   4.22   2.89   2.09   1.47
5 Limitations and discussion
We have proposed a mathematical model of coded exposure cameras. The model includes the Poisson photon (shot) noise and any additive readout noise of finite variance. The model is based on the Shannon–Whittaker framework, which assumes band-limited images. This formalism has allowed us to give closed formulae for the Mean Square Error and Signal to Noise Ratio of coded exposure cameras. In addition, we have given an explicit formula that gives an absolute upper bound for the gain of any coded exposure camera, in terms of Mean Square Error, with respect to a snapshot. The calculations take into account the whole imaging chain, which includes the Poisson photon (shot) noise and any additive (readout) noise of finite variance, in addition to the deconvolution. Our mathematical model does not allow us to prove that the coded exposure method allows for very large gains compared to an optimal snapshot. This may be the result of an imperfect model of our mathematical coded exposure camera. Indeed, our model assumes that the sensor does not saturate, that the relative camera scene velocity is known, that the scene has finite energy and is observed through an optical system that provides a cutoff frequency, that the additive (readout) noise has a finite variance, and it neglects the boundary effects due to the deconvolution. How the results change if one has to, e.g., estimate the velocity is, to the best of our knowledge, an open question.
Single-photon avalanche detectors (SPADs) are a possible implementation of a photon counter. Any light sensing device that produces, when the quantization is neglected, a signal in a one-to-one relationship with the photon count can, w.l.o.g., be assumed to be a photon counter. The quantization noise will, w.l.o.g., be included in the additive (readout) noise later on.
Declarations
Authors' contributions
YT and SO equally contributed to this work. Both authors read and approved the final manuscript.
Acknowledgements
Yohann Tendero is happy to deeply thank “les milieux autorisés”: Igor Ciril (IPSA, MITE), Jérôme Darbon (CNRS, UCLA and the “others offices”), Marc Sigelle (Télécom ParisTech), the one who knocks, and the Betches family for their careful readings of the drafts. Indeed, their feedback has been duly noted. Research partially supported by the ONR Grants N00014-14-1-0683 and N00014-12-1-0838, and DOE Grant DE-SC00183838. Part of this work was done at the UCLA Math. Dept.
Competing interests
The authors declare that they have no competing interests.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
References
1. Agrawal, A., Raskar, R.: Resolving objects at higher resolution from a single motion-blurred image. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1–8 (2007)
2. Agrawal, A., Raskar, R.: Optimal single image capture for motion deblurring. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2560–2567 (2009)
3. Agrawal, A., Xu, Y.: Coded exposure deblurring: optimized codes for PSF estimation and invertibility. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2066–2073 (2009)
4. Raskar, R.: Method and apparatus for deblurring images. Google Patents. US Patent 7,756,407 (2010)
5. Raskar, R., Agrawal, A., Tumblin, J.: Coded exposure photography: motion deblurring using fluttered shutter. ACM Trans. Graph. 25(3), 795–804 (2006)
6. Raskar, R., Tumblin, J., Agrawal, A.: Method for deblurring images using optimized temporal coding patterns. Google Patents. US Patent 7,580,620 (2009)
7. Agrawal, A., Gupta, M., Veeraraghavan, A., Narasimhan, S.G.: Optimal coded sampling for temporal super-resolution. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 599–606 (2010)
8. Dengyu, L., Jinwei, G., Hitomi, Y., Gupta, M., Mitsunaga, T., Nayar, S.K.: Efficient space-time sampling with pixel-wise coded exposure for high-speed imaging. IEEE Trans. Pattern Anal. Mach. Intell. 36(2), 248–260 (2014)
9. Ding, Y., McCloskey, S., Yu, J.: Analysis of motion blur with a flutter shutter camera for nonlinear motion. In: Proceedings of the Springer-Verlag European Conference on Computer Vision (ECCV), pp. 15–30 (2010)
10. Gao, D., Liu, D., Xie, X., Wu, X., Shi, G.: High-resolution multispectral imaging with random coded exposure. J. Appl. Remote Sens. 7(1), 73695–73695 (2013)
11. Holloway, J., Sankaranarayanan, A.C., Veeraraghavan, A., Tambe, S.: Flutter shutter video camera for compressive sensing of videos. In: Proceedings of the IEEE International Conference on Computational Photography (ICCP), pp. 1–9 (2012)
12. Huang, K., Zhang, J., Li, G.: Noise-optimal capture for coded exposure photography. Opt. Eng. 51(9), 093202–10932026 (2012)
13. Jelinek, J.: Designing the optimal shutter sequences for the flutter shutter imaging method. In: SPIE Defense, Security, and Sensing, pp. 77010–77010. International Society for Optics and Photonics (2010)
14. Jelinek, J., McCloskey, S.: Method and system for designing optimal flutter shutter sequence. Google Patents. US Patent 8,537,272 (2013)
15. McCloskey, S.: Velocity-dependent shutter sequences for motion deblurring. In: Proceedings of the Springer-Verlag European Conference on Computer Vision (ECCV), pp. 309–322 (2010)
16. McCloskey, S.: Acquisition system for obtaining sharp barcode images despite motion. EP Patent 2,284,764 (2011)
17. McCloskey, S.: Temporally coded flash illumination for motion deblurring. In: Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 683–690 (2011)
18. McCloskey, S.: Fluttering illumination system and method for encoding the appearance of a moving object. Google Patents. US Patent 8,294,775 (2012)
19. McCloskey, S.: Heterogeneous video capturing system. Google Patents. US Patent 8,436,907 (2013)
20. McCloskey, S.: Motion Deblurring: Algorithms and Systems. Chap. 11: Coded exposure motion deblurring for recognition. Cambridge University Press, Cambridge (2014)
21. McCloskey, S., Au, W., Jelinek, J.: Iris capture from moving subjects using a fluttering shutter. In: Proceedings of the IEEE International Conference on Biometrics: Theory, Applications and Systems (BTAS), pp. 1–6 (2010)
22. McCloskey, S., Ding, Y., Yu, J.: Design and estimation of coded exposure point spread functions. IEEE Trans. Pattern Anal. Mach. Intell. 34(10), 2071–2077 (2012)
23. McCloskey, S., Jelinek, J., Au, K.W.: Method and system for determining shutter fluttering sequence. Google Patents. US Patent 12/421,296 (2009)
24. McCloskey, S., Muldoon, K., Venkatesha, S.: Motion invariance and custom blur from lens motion. In: Proceedings of the IEEE International Conference on Computational Photography (ICCP), pp. 1–8 (2011)
25. Reddy, D., Veeraraghavan, A., Raskar, R.: Coded strobing photography for high-speed periodic events. In: Imaging Systems. Optical Society of America (2010)
26. Sarker, A., Hamey, L.G.C.: Improved reconstruction of flutter shutter images for motion blur reduction. In: Proceedings of the IEEE International Conference on Digital Image Computing: Techniques and Applications (DICTA), pp. 417–422 (2010)
27. Schonbrun, E.F., Gorthi, S.S., Schaak, D.: Fluorescence flutter shutter for imaging cells in flow. In: Computational Optical Sensing and Imaging, pp. 4–4. Optical Society of America (2013)
28. Tai, Y.W., Kong, N., Lin, S., Shin, S.Y.: Coded exposure imaging for projective motion deblurring. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2408–2415 (2010)
29. Tendero, Y.: The flutter shutter camera simulator. Image Process. On Line 2, 225–242 (2012)
30. Tendero, Y., Morel, J.M., Rougé, B.: The flutter shutter paradox. SIAM J. Imaging Sci. 6(2), 813–847 (2013)
31. Torii, S., Shindo, Y.: Information processing apparatus and method for synthesizing corrected image data. Google Patents. US Patent 8,379,096 (2013)
32. Tsai, R.: Pulsed control of camera flash. Google Patents. US Patent 7,962,031 (2011)
33. Tsutsumi, S.: Image processing apparatus and method. Google Patents. US Patent App. 13/104,476 (2011)
34. Veeraraghavan, A., Reddy, D., Raskar, R.: Coded strobing photography: compressive sensing of high speed periodic events. IEEE Trans. Pattern Anal. Mach. Intell. 33(4), 671–686 (2011)
35. Xu, W., McCloskey, S.: 2D barcode localization and motion deblurring using a flutter shutter camera. In: Proceedings of the IEEE Workshop on Applications of Computer Vision (WACV), pp. 159–165 (2011)
36. Asif, M.S., Reddy, D., Boufounos, P.T., Veeraraghavan, A.: Streaming compressive sensing for high-speed periodic videos. In: Proceedings of the IEEE International Conference on Image Processing (ICIP), pp. 3373–3376 (2010)
37. Tendero, Y., Morel, J.M.: An optimal blind temporal motion blur deconvolution filter. IEEE Signal Process. Lett. 20(5), 523–526 (2013)
38. Tendero, Y., Morel, J.M., Rougé, B.: A formalization of the flutter shutter. J. Phys.: Conf. Ser. 386(1), 012001 (2012)
39. Tendero, Y.: The flutter shutter code calculator. Image Process. On Line 5, 234–256 (2015)
40. Gasquet, C., Witomski, P.: Fourier Analysis and Applications: Filtering, Numerical Computation, Wavelets. Texts in Applied Mathematics. Springer, New York (1999)
41. Healey, G.E., Kondepudy, R.: Radiometric CCD camera calibration and noise estimation. IEEE Trans. Pattern Anal. Mach. Intell. 16(3), 267–276 (1994)
42. Kavusi, S., El Gamal, A.: Quantitative study of high-dynamic-range image sensor architectures. In: Electronic Imaging 2004, pp. 264–275. International Society for Optics and Photonics (2004)
43. Hasinoff, S.W., Durand, F., Freeman, W.T.: Noise-optimal capture for high dynamic range photography. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 553–560 (2010)
44. Fowler, B.A., El Gamal, A., Yang, D.X., Tian, H.: Method for estimating quantum efficiency for CMOS image sensors. In: Photonics West '98 Electronic Imaging, pp. 178–185. International Society for Optics and Photonics (1998)
45. Daniels, A.: Field Guide to Infrared Systems, Detectors, and FPAs. Field Guide Series. Society of Photo Optical, Bellingham, Washington (2010)
46. Agrawal, A., Xu, Y., Raskar, R.: Invertible motion blur in video. ACM Trans. Graph. 28(3), 95:1–95:8 (2009)
47. Degerli, Y., Lavernhe, F., Magnan, P., Farre, J.: Non-stationary noise responses of some fully differential on-chip readout circuits suitable for CMOS image sensors. IEEE Trans. Circuits Syst. II 46(12), 1461–1474 (1999)
48. Lukac, R.: Computational Photography: Methods and Applications, 1st edn. CRC Press Inc, Boca Raton (2010)
49. Boracchi, G., Foi, A.: Uniform motion blur in Poissonian noise: blur/noise tradeoff. IEEE Trans. Image Process. 20(2), 592–598 (2011)
50. Bony, J.M.: Cours d'analyse: théorie des distributions et analyse de Fourier. Éditions de l'École Polytechnique, Palaiseau (2001)
51. Stein, E.M., Weiss, G.L.: Introduction to Fourier Analysis on Euclidean Spaces. Mathematical Series. Princeton University Press, Princeton (1971)
52. Krantz, S.G.: A Panorama of Harmonic Analysis. Carus Mathematical Monographs 27. Mathematical Association of America, United States (1999)
53. Shiryaev, A.N.: Probability. Graduate Texts in Mathematics. Springer, New York (1984)
54. Marks, R.J.: Introduction to Shannon Sampling and Interpolation Theory. Springer Texts in Electrical Engineering. Springer, New York (1991)
55. Chonavel, T., Ormrod, J.: Statistical Signal Processing: Modelling and Estimation. Advanced Textbooks in Control and Signal Processing. Springer, London (2002)
56. Kovacevic, B., Durović, Z.: Fundamentals of Stochastic Signals, Systems and Estimation Theory: with Worked Examples. Springer, Berlin, Heidelberg (2008)
57. Zaknich, A.: Principles of Adaptive Filters and Self-Learning Systems. Advanced Textbooks in Control and Signal Processing. Springer, London (2006)
58. Dahlhaus, R.: Mathematical Methods in Signal Processing and Digital Image Analysis. Springer Complexity. Springer, Berlin, Heidelberg (2008)
59. Karr, A.F.: Probability. Springer Texts in Statistics. Springer, New York (1993)
60. Resnick, S.: A Probability Path. Birkhäuser, Boston (1999)
61. Makarov, B., Podkorytov, A.: Real Analysis: Measures, Integrals and Applications. Universitext. Springer, London (2013)
62. Wiener, N.: The Fourier Integral and Certain of its Applications. Cambridge Mathematical Library. Cambridge University Press, Cambridge (1988)
63. Durán, A.L., Estrada, R., Kanwal, R.P.: Extensions of the Poisson summation formula. J. Math. Anal. Appl. 218(2), 581–606 (1998)
64. Katznelson, Y.: An Introduction to Harmonic Analysis. Cambridge Mathematical Library. Cambridge University Press, Cambridge (2004)
65. Butzer, P.L., Stens, R.L.: The Euler-MacLaurin summation formula, the sampling theorem, and approximate integration over the real axis. Linear Algebra Appl. 52, 141–155 (1983)
66. Timan, A.F.: Theory of Approximation of Functions of a Real Variable. Dover Books on Advanced Mathematics. Pergamon Press, New York (1963)