Open Access. Published by De Gruyter, March 22, 2024. Licensed under CC BY 4.0.

Snapshot spectral imaging: from spatial-spectral mapping to metasurface-based imaging

  • Kaiyang Ding, Ming Wang, Mengyuan Chen, Xiaohao Wang, Kai Ni, Qian Zhou and Benfeng Bai
From the journal Nanophotonics

Abstract

Snapshot spectral imaging technology enables the capture of complete spectral information of objects in an extremely short period of time, offering wide-ranging applications in fields requiring dynamic observations such as environmental monitoring, medical diagnostics, and industrial inspection. In the past decades, snapshot spectral imaging has made remarkable breakthroughs with the emergence of new computational theories and optical components. From the early use of various spatial-spectral data mapping methods, the field has evolved toward encoding various dimensions of light, such as amplitude, phase, and wavelength, followed by computational reconstruction. This review focuses on a systematic presentation of the system architecture and mathematical modeling of these snapshot spectral imaging techniques. In addition, the introduction of metasurfaces expands the modulation of spatial-spectral data and brings advantages such as system size reduction, which has become a research hotspot in recent years and is regarded as the key to the next-generation snapshot spectral imaging techniques. This paper provides a systematic overview of the applications of metasurfaces in snapshot spectral imaging and provides an outlook on future directions and research priorities.

1 Introduction

The allure of snapshot spectral imaging (SSI) lies in the ability to capture the complete spectral information of a scene in a single “snapshot”, thus enabling real-time dynamic observations, which is particularly suitable for cellular tissue imaging [1], [2], [3], [4], gas diffusion imaging [5], [6], and flame combustion imaging [7], [8]. The ideal SSI system would offer high spatial and spectral resolution while maintaining a compact form factor and computational efficiency. However, achieving this ideal involves tackling a multitude of trade-offs among resolution, capture speed, computational load, and hardware complexity.

At the heart of SSI is the imperative to furnish a comprehensive, accurate representation of the scene’s spectral information. Traditional spectral imaging techniques, such as tunable filters and scanning methods, although effective, are not well-suited for real-time or dynamic scenarios. These methods suffer from the “time-multiplexing dilemma” – the more time taken for capturing spectral data, the less suitable they are for rapidly changing scenes or moving objects.

The quest for better spectral and spatial reconstruction drives recent advancements in SSI. Notable techniques include computational reconstruction methods that use complex algorithms to interpret sensor data, and non-computational methods that rely solely on innovative optical engineering. Each approach has its unique merits and drawbacks in terms of computational efficiency, reconstruction accuracy, and hardware requirements. A particularly promising frontier in SSI is the integration of metasurface technologies, which offer unprecedented control over optical properties at sub-wavelength scales. Metasurfaces can potentially make SSI systems more compact, efficient, and versatile, providing a richer set of degrees of freedom to manipulate incoming light and thereby improve spectral information capture.

In this review, we survey the current state of development of SSI technology, organized by the data acquisition schemes and detector capture modes of the imaging systems. We first categorize existing techniques into non-computational spatial-spectral mapping approaches, computation-based coded reconstruction methods, and the latest metasurface-based imaging approaches, summarizing the optical principles, mathematical models, and optical modulation devices necessary for each technique. Finally, we highlight outstanding issues and provide an outlook on future developments, especially in the context of emerging metasurface technologies.

2 Spatial-spectral mapping spectral imaging

In spatial-spectral mapping, the spatial images or sub-images of the target 3D spectral cube are tiled on the 2D detector along the spectral dimension, resulting in a one-to-one correspondence between detector pixels and cube voxels, so reconstruction can be completed without complex subsequent algorithms. The fast imaging allows it to be applied to highly dynamic multi-spectral scenarios, such as astronomical nebula observation, biological tissue analysis, and gas diffusion monitoring [9], [10]. However, since the number of detector array pixels limits the number of observable spectral cube voxels, such instruments involve a trade-off between spatial and spectral resolution and need to be optimized according to the scene. Mapping-type snapshot spectral imagers mainly include the integral field [Figure 1(a and b)] and spatial replication types [Figure 1(c and d)].

Figure 1: 
Data modulation scheme for spatial-spectral mapping spectral imaging, where the numbers “1–9” on the cube represent the fields being integrated and the letter “A” illustrates the spatial scene being replicated. (a and b) SSI with slicer mirrors (a) and lenslet array (b) as integral field units. (c and d) Channel-divided (c) and aperture-divided (d) spatial replication SSI techniques.

2.1 Integral field snapshot spectral imaging

The integral field spectrometer (IFS) is a 3D spectral imager developed gradually in the late 1980s, initially mainly for astronomical spectral observations, such as TIGER [11], the first scientific IFS telescope system. Compared with the traditional whiskbroom or pushbroom spectrometer, the IFS adopts an integral field unit (IFU) array instead of a slit, which significantly improves the optical throughput and can capture weak spectral features; it is widely used in in vivo fluorescence imaging, astronomical remote sensing, and other applications. The IFU continuously cuts the target image into several sub-images, and then reorganizes the sub-images for focused imaging after dispersion. The complete spectral data cube can be obtained in a single exposure, and the fast imaging enables dynamic spectral imaging. The IFU strongly affects the spatial and spectral resolution of the IFS: an excessively large unit reduces detector utilization efficiency, while an overly small unit increases crosstalk between the sub-images. Therefore, much research has been invested in the optimized design of the IFU to obtain better imaging performance. Currently, IFUs can be divided into three main categories: slicer mirrors, lenslet arrays, and optical fiber bundles.

2.1.1 Slicer mirrors

The slicer-based spectrometer is the earliest integral field technology and has been applied to various telescope systems [12]. As shown in Figure 2, it usually consists of a front telescopic system, a slicer mirror, a mirror array, and a collimating spectroscopy system. The slicer mirror, which can be made from a series of reflective elements bonded together at different rotation angles or by ultra-precise micromachining technology [12], cuts the rectangular field of view (FOV) of the fore-optics system into a micro-FOV array and reflects the beam into the pseudo-pupil and then into the spectrometer.

Figure 2: 
Slicer-based IFS (The Slicer mirror shown was manufactured by Prof. Paul Shore).

The slicer-based spectrometers have a simpler structure than lenslet- or fiber-based designs and do not require complex data post-processing. With good optical design and mechanical assembly, these spectral imagers can use the detector arrays to their fullest potential. To obtain better spectral resolution, it is often necessary to increase the size of the instrument so as to obtain a longer light propagation distance after dispersion. The reported MUSE telescope achieves a resolving power better than 3000 ($R = \lambda/\Delta\lambda$ denotes the spectral resolving power of a telescopic system), but its entire optical system reaches nearly 3 m in length [13]. In addition, the main drawbacks of such instruments are the high precision required for slicer mirror fabrication, where the form accuracy of the contoured surfaces can be as demanding as 5 nm rms [12], and the narrow slice aperture of the slicer stack, which can cause severe diffraction (pupil elongation in the direction of dispersion) and thus crosstalk [14]. To solve this problem, Zhang et al. adopted spherical slicer mirrors for interference correction to improve spectral resolution [15], [16]. In addition, Content et al. proposed the microslice technique based on the previous research results [17], [18]. The core device is a cylindrical microlens array that cuts the aperture into rectangular sub-apertures. It transforms the stretched and magnified target images into slit image arrays, and then maps the data cube voxels onto the detector as narrow bands. In fact, the principle of the microslice and the pinhole [19] is similar to that of the lenslet-based spectrometer, which will be described in detail below.

2.1.2 Lenslet array

The lenslet-based IFS was first used in astronomy: the OH-Suppressing Infra-Red Imaging Spectrograph at the Keck telescope, the Gemini Planet Imager at the Gemini-South telescope, and the Infra-Red Imaging Spectrograph of the Thirty Meter Telescope (under construction) all utilize lenslet arrays [20], [21], [22]. Due to its low-light imaging ability, the lenslet-based technique has also been gradually developed for biological microscopic imaging [23]. As shown in Figure 3(a), the fore-optics images the acquired target information $f(x, y, \lambda)$ onto the lenslet array. The process can be described as

(1) $f_1(x, y, \lambda) = \sum_{m,n} \mathrm{rect}\left(\dfrac{x - x_m}{D}, \dfrac{y - y_n}{D}\right) f(x, y, \lambda),$

where $f_1(x, y, \lambda)$ is the spectral cube data after lenslet segmentation, $(x_m, y_n)$ is the center coordinate of the $(m, n)$ sub-image, and $D$ is the diameter of the lenslet.

Figure 3: 
Lenslet-based IFS. (a) Schematic diagram of the optical layout. (b) Experimental setup [25]. (c) Principle of spatial-spectral resolution tunability.

Then, the dispersion process can be described as

(2) $f_2(x, y, \lambda) = \sum_{m,n} \mathrm{rect}\left(\dfrac{x - x_m}{D}, \dfrac{y - y_n}{D}\right) f(x, y, \lambda)\, p(\lambda),$

where $p(\lambda)$ is the transfer function of the dispersive element. In general, lenslet arrays are often rotated at a certain angle to avoid spectral aliasing after sub-image dispersion, and the image distribution on the detector is

(3) $I(x, y) = \sum_{m,n} \mathrm{rect}\left(\dfrac{x - x_m}{D}, \dfrac{y - y_n}{D}\right) f(x, y, \lambda)\, p(\lambda)\, r(\alpha),$

where $r(\alpha)$ is the rotation transformation matrix, and optimized designs can be made to align adjacent spectral bands tangentially, making full use of the detector pixels. The lenslet-based spectrometers can be designed with tunable resolution without replacing any optics, and the work of Ji et al. demonstrates a system capable of tunable 0.82–4.17 nm spectral resolution [24]. Figure 3(c) illustrates the tuning process.

The varifocal lens can adjust the size of the sub-image on the detector, according to the formula

(4) $d = \dfrac{a f_f}{f_c},$

where $d$ and $a$ are the diameters of the sub-image and the pupil image, respectively, and $f_f$ and $f_c$ are the focal lengths of the collimating and focusing lenses, respectively.

The spectral band dispersion on the detector can be expressed as

(5) $\dfrac{dx}{d\lambda} = f_f \dfrac{d\theta}{d\lambda},$

where $d\theta/d\lambda$ represents the angular dispersion of the dispersive element. An increase in $dx/d\lambda$ stretches the dispersed sub-images, resulting in spectral overlap.

To eliminate spectral overlap, the lenslet array needs to be rotated, with a final spectral resolution of

(6) $d\lambda_p = \dfrac{d}{dx/d\lambda} = \dfrac{a f_f}{f_c} \cdot \dfrac{d\lambda}{d\theta} \cdot \dfrac{1}{f_f} = \dfrac{a}{f_c}\dfrac{d\lambda}{d\theta}.$

Although the raw data acquired by the detector appear to be aliased, after spectral calibration, $p(\lambda)$ and $(x_m, y_n)$ in Eq. (3) can be determined, which in turn establishes a correspondence between the detector pixels and the cube voxels, so the reconstruction can be completed with a lookup table.
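A lookup-table reconstruction of this kind can be sketched as follows; the array names, shapes, and the per-voxel averaging are illustrative assumptions rather than details of any cited system.

```python
import numpy as np

# Minimal sketch of lookup-table reconstruction for a lenslet-based IFS,
# assuming calibration has produced, for every detector pixel, the
# sub-image index (m, n) and spectral channel k that it maps to.

def reconstruct_cube(raw, lut, cube_shape):
    """raw: (H, W) detector frame.
       lut: (H, W, 3) integer array holding (m, n, k) per pixel, -1 = unused.
       cube_shape: (M, N, L) target spectral cube size."""
    cube = np.zeros(cube_shape)
    counts = np.zeros(cube_shape)            # average multiple pixels per voxel
    valid = lut[..., 0] >= 0
    m, n, k = lut[valid].T                   # index arrays of mapped pixels
    np.add.at(cube, (m, n, k), raw[valid])   # accumulate detector counts per voxel
    np.add.at(counts, (m, n, k), 1.0)
    return cube / np.maximum(counts, 1.0)
```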

2.1.3 Optical fiber bundle

Lenslet arrays are often used in combination with fiber bundles to directly couple the pupil image into the fiber cores. This approach addresses the loss of optical information and energy in the fiber-based IFS that arises from incomplete filling caused by the fiber cladding. Consequently, a high optical throughput is achieved [26], making these spectral imagers widely applicable in astronomical observation.

As shown in Figure 4, unlike the previous two IFSs, the optical fiber bundle converts the sub-image array from 2D to 1D, presenting a different image format on the detector [27]. Therefore, there is a high degree of design freedom to adjust the spatial sampling of the system to fit the spectral sampling space and improve the detector pixel utilization to meet specific application requirements. Advances in fiber preparation technology have driven the development of highly integrated fibers, effectively improving spatial resolution. This improvement not only leads to more compact instruments, but also offers potential for the development of medical endoscopy [28], [29], [30].

Figure 4: 
Fiber-based IFS; the inset (bottom) shows the fiber bundle output [27].

In general, integral field imaging spectrometers use conventional grating or prism devices for spectral dispersion, and the optical path length of the system after dispersion determines, to a certain extent, its operating bandwidth and spectral resolution. To improve the imaging resolution of the system, it is necessary to increase the optical path length on the one hand and the number of detectors or effective pixels on the other, which inevitably increases the complexity or overall size of the system; consequently, much research has focused on innovative optical designs to improve imaging quality. Currently, the IFS has also been combined with cutting-edge technologies and has shown exciting results. Examples include combining compressive sensing for spectral imaging, ultrafast burst imaging, and optical fabrication for novel IFUs [31], [32], [33].

2.2 Spatial replication snapshot spectral imaging

Spatial replication spectrometers typically use either a lenslet array (LA) or beam splitters (BS) to perform parallel spatial operations on the target scene. Combined with multispectral filter elements for spectral scanning, these spectrometers generate a set of spectrally separated images on the detector. This setup allows for snapshot imaging through multiple simultaneous sampling.

Depending on the type of optical components employed, spatial replication spectrometers can be divided into two main types: multi-channel beam-splitting spectral imaging and multi-aperture divided spectral imaging.

2.2.1 Multi-channel beam-splitting

The primary component in multichannel beam-splitting spectral imaging is the dichroic filter, specifically designed to either reflect or transmit specific light beams. Dichroic filter arrays have a historical presence, dating back to the 1950s when they were employed in television cameras [34]. In this setup, each beam was received by a different detector, as shown in Figure 5(a). To allow the multispectral data to be received by a single detector, a dichroic filter stack composed of slanted filters was also designed [35], see Figure 5(b).

Figure 5: 
Various types of multispectral beam splitter layouts. (a) Spectral filter array. (b) Spectral filter tilt stack. (c–e) Multichannel prism blocks. (f) Wollaston polarizing beam splitter, P, polarizer; PR, waveplate; WP, Wollaston polarizers.

Subsequently, Lang et al. proposed a prism assembly [36] [Figure 5(c)], consisting of three prism blocks. This assembly utilizes total internal reflection and dichroic filters to achieve the separation of spectral channels. Based on their work, Murakami et al. proposed a four-channel separation prism combined with a filter array to achieve mixed-resolution multi-spectral imaging [37] [Figure 5(d)]. In this configuration, one path is received through a multispectral filter array, and higher-resolution spectral reconstruction is performed on this detector data.

In recent years, Greiner et al. and Rothhardt et al. designed a new four-channel BS based on the Kösters prism for space observation [38], [39]. As shown in Figure 5(e), this design uses a dichroic filter and the total reflection at the outer edge surfaces to reflect all four generated beams onto the detector. Additionally, the three prisms in the system allow different assembly layouts to adjust the arrangement of the different band images on the detector. Compared to other designs, the novel Kösters-type BS is more compact and does not require multiple separate detectors.

Harvey et al. [40] introduced enhancements to the traditional Lyot filter by incorporating a Wollaston beam-splitting polarizer [Figure 5(f)] in place of the original polarizer to separate the incident beam. This modification, combined with the filtering function of a Lyot birefringent crystal, achieves multi-spectral imaging. Theoretically, if the system adopts N cascaded birefringent interference units, 2^N spectral images can be simultaneously formed on the detector. However, the system's operating bands are severely limited by the wavelength-dependent dispersion of Wollaston prisms. As a remedy, it becomes necessary to employ different crystal media or introduce a polarization grating for achromatic performance [41], [42], [43].

Obviously, the most distinctive feature of such imaging systems is the cascading of multiple beam-splitting devices to achieve spectral separation in multiple channels. However, the loss of energy and the increase in volume significantly limit the number of spectral channels that can be obtained.

2.2.2 Multi-aperture divided

Multi-aperture divided spectral imaging usually uses a lenslet array to realize spatial replication, combined with multi-spectral filters to realize spectral cube measurement within a single snapshot.

Hirai et al. [46] used this method to design a multi-image Fourier transform spectrometer. This spectrometer utilized a Michelson interferometer with a tilted mirror to generate position-dependent optical path differences for the interference sub-images on the detector. Subsequent 3D spectral reconstruction is achieved by establishing a Fourier transform relationship between the interferogram and the spectrum.

Similarly, Kudenov et al. [44] replaced the Michelson interferometer with a birefringent Nomarski prism to construct a more compact and vibration-resistant snapshot spectrometer, as shown in Figure 6(a and b). The rotation angle of the prism relative to the detector is deliberately designed, allowing the sub-images to have distinct optical path differences. Spectral reconstruction is then accomplished by the Fourier transform.

Figure 6: 
Multi-aperture divided spectral imaging system. (a) Fourier transform snapshot imaging spectrometer [44], two Nomarski prisms (NP) and a half-wave plate (HWP) between them form a birefringent polarization interferometer (BPI). (b) Schematic of the spatial position of the linear optical path differences (OPD) with relation to each sub-image. (c) Schematic of the ORRIS system [45]. (d) Lenslet arrays are arranged tilted to enable spectral scanning of individual sub-images with linear variable filter (LVF). (a and b) Reproduced with permission [44]. Copyright 2012, Optica Publishing Group. (c and d) Reproduced with permission [45]. Copyright 2019, Optica Publishing Group.

Hubold et al. [47] pioneered the imaging scheme involving a tilted LVF combined with an LA, as shown in Figure 6(c and d). This approach reduces fabrication difficulty and cost, but also introduces the challenge of balancing detector wastage against the spectral perturbation caused by the LVF tilt angle. In 2019, Mu et al. [45] introduced a modification by tilting the LA instead of the LVF, significantly reducing the spectral perturbations and ensuring stability for efficient detector utilization.

The mechanisms employed by both approaches to achieve simultaneous multisampling of the continuous spectrum are rather similar. Replicated individual spatial sub-images are scanned by different continuous spectra, yielding complete spatial-spectral data thanks to the angle between the filter and the detector. However, the size of the LA determines the maximum number of spectral channels the system can realize, and spatial replication means the detector pixels are divided evenly among the sub-images, so an increase in spectral resolution inevitably leads to a decrease in spatial resolution.

3 Coded reconfiguration spectral imaging

Coded reconfiguration spectral imaging involves encoding the amplitude, phase, or wavelength information of light using specially designed optics. The encoding process can be expressed as a sensing matrix for the system, allowing the establishment of a mathematical model. This, combined with the theory of compressed sensing, enables the reconstruction of a spectral cube, as depicted in Figure 7. Computational spectral imaging is expected to overcome the limitations of traditional methods, achieving higher resolution data with an equal number of detector pixels and enabling more efficient measurements.

Figure 7: 
Data modulation scheme for coded reconfiguration spectral imaging. (a) Coded aperture snapshot spectral imaging (amplitude encoding). (b) Snapshot spectral imaging based on diffractive optical elements (phase encoding). (c) Snapshot spectral imaging based on pixelated filter arrays (wavelength encoding).

However, such spectral imaging systems rely on precise calibration and demand substantial computational resources. Currently, there are relatively few practical applications, with the only commercially available choice being pixelated wavelength-encoded snapshot spectral cameras [48].

3.1 Coded aperture spectral imaging

The common spatial modulation devices are randomly distributed binary coding masks. According to their different positions in the optical path, we can divide the coded aperture snapshot spectral imaging (CASSI) into three categories: spectral coding with double-dispersion, spatial coding with single-dispersion, and spatial-spectral coding, corresponding to DD-CASSI, SD-CASSI, and SS-CASSI, respectively [Figure 8(a–c)]. In addition, there are grayscale coding masks instead of 0–1 coding, and color coding masks combined with multispectral filters.

Figure 8: 
Optical layout diagram of the CASSI system. (a) DD-CASSI with the coding mask located between the two dispersion devices. (b) SD-CASSI, the coding mask is located before the dispersion device and simultaneously modulates the spatial-spectral data. (c) SS-CASSI with the coding mask located some distance in front of the detector.

3.1.1 DD-CASSI

Brady et al. first proposed a snapshot compressive imaging spectrometer using a coded aperture with a double-dispersion structure (DD-CASSI) in 2006 [49], [50]. This structure uses two dispersive elements for spectral splitting and recombination, respectively, with the coded aperture placed in the middle of the system to enable sparse sampling of spectral information. As shown in Figure 8(a), a target $f(x, y, \lambda)$ is dispersed spectrally after passing through a dispersive element, which can be expressed as

(7) $f_1(x, y, \lambda) = f\big(x - \alpha(\lambda - \lambda_0),\, y,\, \lambda\big),$

where $\alpha$ is the grating dispersion rate and $\lambda_0$ is the initial wavelength. The lens focuses the dispersed target information onto the coding mask with element size $\Delta$ for modulation:

(8) $f_2(x, y, \lambda) = f_1(x, y, \lambda)\, t(x, y), \qquad t(x, y) = \sum_{m,n} t_{m,n}\, \mathrm{rect}\left(\dfrac{x}{\Delta} - m, \dfrac{y}{\Delta} - n\right),$

where $t_{m,n} \in \{0, 1\}$ indicates the transmission pattern of the coded aperture and $(m, n)$ indexes the corresponding coding sub-aperture. A subsequent lens system recombines the dispersed spectral information:

(9) $f_3(x, y, \lambda) = f_2\big(x + \alpha(\lambda - \lambda_0),\, y,\, \lambda\big),$

and the final lens system focuses the encoded target scene onto the focal plane array (FPA) with a spectral response of $r(\lambda)$:

(10) $I(x, y) = \int f(x, y, \lambda)\, t\big(x + \alpha(\lambda - \lambda_0),\, y\big)\, r(\lambda)\, d\lambda.$

Due to the sparse sampling process and the superposition of light from different object points and wavelengths, the amount of measurement data is much smaller than the original data cube, and thus the cube usually needs to be recovered from the measurements by reconstruction algorithms. To do this, we discretize Eq. (10) as follows:

(11) $I_{i,j} = \sum_{k} F_{i, j, k}\, T_{i, j+k-1, k}\, R(\lambda_k),$

where $F \in \mathbb{R}^{M \times N \times L}$ is the scene data cube, $M \times N$ is the spatial dimension, $L$ is the spectral dimension, and the coded aperture transmittance is $T \in \{0, 1\}^{M \times (N+L-1) \times L}$.

The double-dispersion design makes efficient use of the detector, but it requires many optical elements and therefore high assembly precision. In 2021, Yu et al. proposed a reflective DD-CASSI [51], [52] that uses a reflective coded aperture and a folded optical path to obtain a compact structure, realizing the two dispersion processes with only one dispersive element; however, the optical throughput of the system is reduced by the beam-splitting mirrors.
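The discretized DD-CASSI measurement of Eq. (11) can be illustrated with a short numerical sketch; the cube, mask, and spectral response below are random, illustrative stand-ins.

```python
import numpy as np

# Minimal sketch of the discretized DD-CASSI forward model in Eq. (11):
# the cube itself stays registered on the detector, while each spectral
# band sees the coded aperture shifted by one pixel per channel.

M, N, L = 64, 64, 16                          # spatial x spatial x spectral size
F = np.random.rand(M, N, L)                   # scene data cube
T = (np.random.rand(M, N + L - 1) > 0.5)      # binary coded-aperture pattern
R = np.ones(L)                                # detector spectral response R(lambda_k)

I = np.zeros((M, N))
for k in range(L):
    I += F[:, :, k] * T[:, k:k + N] * R[k]    # band k multiplied by the shifted mask
```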

3.1.2 SD-CASSI

In single-dispersion CASSI (SD-CASSI) [53], only one dispersive element is used: the scene light is first spatially sampled by the coded aperture before passing through the dispersive element, and the detector records the integral of the individual spectral images after their mutual displacement. Therefore, SD-CASSI requires a larger detector but a smaller coded aperture than DD-CASSI. Similarly, the data obtained at the detector can be expressed as

(12) $I_{i,j} = \sum_{k} F_{i, j-k+1, k}\, T_{i, j-k+1, k}\, R(\lambda_k) + N_{i,j},$ where $N_{i,j}$ denotes the measurement noise.
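For comparison with the DD-CASSI sketch above, the SD-CASSI measurement of Eq. (12) can be illustrated as follows; again, the shapes and the random mask are illustrative.

```python
import numpy as np

# Minimal sketch of the SD-CASSI forward model in Eq. (12): the cube is
# masked by a fixed coded aperture and then sheared by the disperser, so
# band k lands on the detector displaced by k pixels along one axis.

M, N, L = 64, 64, 16
F = np.random.rand(M, N, L)                   # scene data cube
T = (np.random.rand(M, N) > 0.5)              # coded aperture, applied before dispersion
R = np.ones(L)

I = np.zeros((M, N + L - 1))                  # detector is wider along the shear axis
for k in range(L):
    I[:, k:k + N] += F[:, :, k] * T * R[k]    # masked band k, sheared by k pixels
```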

In addition, Cao et al. proposed a system with a more simplified structure [54], which consists only of a coded mask, a prism, and a grayscale camera. The well-designed mask samples the scene so that the spectra of the sampling points do not overlap on the detector, allowing spectral video to be acquired in real time without reconstruction algorithms, at the expense of spatial resolution. Instead of static binary coded apertures, some scholars have used a digital micromirror device (DMD), in which the rotation angle of each micromirror can be individually controlled at high speed, to realize grayscale coded apertures [55], [56], [57], which improves reconstruction accuracy.

3.1.3 SS-CASSI

In 2014, Lin et al. proposed a dual-coding system design [58], where the scene light is first focused onto a DMD for spatial coding, then dispersed by a diffraction grating, and finally spectrally modulated by a liquid-crystal-on-silicon display. This scheme increases the cost and requires additional calibration. They later improved the scheme [59] by placing the coding aperture of the SD-CASSI some distance before the detector [Figure 8(c)] to achieve spatial-spectral co-modulation, and the experimental results were significantly better than those of the SD-CASSI. In addition, Rueda et al. also proposed a dual-coding system [60], which introduces two high-resolution coding templates with different transmittances to encode the spatial-spectral information, but it requires multiple measurements to realize high-resolution spectral reconstruction.

3.2 Pixelated filter array spectral imaging

Pixelated filtered spectral imaging enables wavelength coding by transmitting or reflecting a specific spectrum through a periodic arrangement of pixel-level units directly deposited or monolithically integrated on a detector. This allows the simultaneous acquisition of spatial and spectral information. The imaging process can be expressed as

(13) $I(x, y) = \int f(x, y, \lambda)\, T(x, y, \lambda)\, d\lambda = \int f(x, y, \lambda) \sum_{i,j} t_{i,j}(\lambda)\, \mathrm{rect}\left(\dfrac{x}{B} - i, \dfrac{y}{B} - j\right) d\lambda,$

where $T(x, y, \lambda)$ denotes the transmittance function of the filter array, $(i, j)$ is the index of the sub-filter, and $B$ is the size of each sub-filter.
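The wavelength-encoded measurement of Eq. (13) can be sketched numerically as below; the 2 × 2 mosaic layout and the Gaussian transmittance curves are illustrative assumptions only.

```python
import numpy as np

# Minimal sketch of the wavelength-encoded measurement in Eq. (13): each
# detector pixel integrates the scene spectrum weighted by the transmittance
# of the filter tile it sits under.

H, W, L = 128, 128, 31
wavelengths = np.linspace(400, 700, L)                 # nm
scene = np.random.rand(H, W, L)                        # f(x, y, lambda)

centers = [450, 520, 580, 650]                         # four illustrative pass bands
t = np.stack([np.exp(-0.5 * ((wavelengths - c) / 15.0) ** 2) for c in centers])

mosaic = np.zeros((H, W, L))                           # T(x, y, lambda)
for idx, (di, dj) in enumerate([(0, 0), (0, 1), (1, 0), (1, 1)]):
    mosaic[di::2, dj::2, :] = t[idx]

I = np.sum(scene * mosaic, axis=-1)                    # detector image I(x, y)
```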

In addition, there is a diversity of structural designs for pixel-level filter units, aiming at more wavelength bands, higher transmittance, and more cost-effective spectral imaging. According to their fabrication methods, they can be classified as color pigment filters, Fabry–Pérot filters, and sub-wavelength patterned grating filters (as well as metasurface filters, which will be discussed in Section 4).

3.2.1 Color filter array

Bayer arrays [Figure 9(a)] have been widely used in modern photographic equipment since they were proposed in 1976 [61]. A Bayer array consists of a 2 × 2 pattern of the three primary color filters, where the different colors are obtained with transmitting (absorbing) layers of organic or pigment dyes [62]. Employing interpolation (demosaicing) algorithms, it is possible to obtain from a Bayer image an RGB image that matches human vision. Nowadays, the growing use of hyperspectral data has made the reconstruction of spectral information from RGB images of interest to researchers [63].
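A minimal example of such interpolation is bilinear demosaicing of an RGGB mosaic, sketched below; the normalized-convolution formulation and kernel are illustrative, and practical camera pipelines use more sophisticated, edge-aware algorithms.

```python
import numpy as np
from scipy.ndimage import convolve

# Minimal sketch of bilinear demosaicing for an RGGB Bayer mosaic: each color
# plane is interpolated from the pixels where it was actually sampled
# (normalized convolution with a bilinear kernel).

def bilinear_demosaic(raw):
    """raw: (H, W) mosaic with R at (0,0), G at (0,1)/(1,0), B at (1,1)."""
    H, W = raw.shape
    masks = np.zeros((3, H, W))
    masks[0, 0::2, 0::2] = 1                          # R sample locations
    masks[1, 0::2, 1::2] = 1                          # G sample locations
    masks[1, 1::2, 0::2] = 1
    masks[2, 1::2, 1::2] = 1                          # B sample locations

    kernel = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]]) / 4.0
    rgb = np.empty((H, W, 3))
    for c in range(3):
        num = convolve(raw * masks[c], kernel, mode='mirror')
        den = convolve(masks[c], kernel, mode='mirror')
        rgb[..., c] = num / den                       # interpolate missing samples
    return rgb
```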

Figure 9: 
Different multispectral filter array (MSFA) designs [64]. (a) Bayer 2 × 2 array. (b–c) 3 × 3 and 4 × 4 MSFA. (d–e) Uniformly and randomly distributed MSFA. (f) MSFA with hexagonal pixels.

Additionally, increasing the number of filter bands is the most direct and effective way to improve the imaging quality. Similar to RGB cameras, multispectral cameras based on the division-of-focal-plane (DoFP) technique usually carry four or more channels of filters, which are designed for spatial uniformity and spectral consistency [65] to form a multispectral filter array [Figure 9(b and c)], so that multispectral data can be acquired in a single shot [64], [66]. However, this sacrifices spatial resolution to some extent, and missing information at different locations and bands needs to be filled in by demosaicing algorithms [67], [68].

To facilitate digital storage and processing, almost all MSFAs use square pixels, but hexagonal pixels [Figure 9(f)] have also been reported [69]. Most multispectral cameras based on DoFP technology combine filters of different wavelength bands into hyperpixels, which are then aligned on the FPA [64]. Aggarwal et al. proposed two different alignment patterns: uniform [Figure 9(d)] and random [Figure 9(e)] distribution [70], where randomly distributed MSFAs can be reconstructed as spectral images by compression-aware algorithms.

3.2.2 Fabry–Pérot filter array

A Fabry–Pérot (F–P) MSFA can be obtained by integrating multiple micro F–P filters, each transmitting a single narrow band; it is usually necessary to fabricate F–P microcavities of different lengths to produce a series of different narrow-band transmission peaks, as shown in Figure 10.

Figure 10: 
Schematic diagram of the integrated F–P filter [71]. (a) 3D structure of a multilayer F–P filter. (b) Transmission spectrum of the filter array. Reproduced with permission [71]. (a and b) Copyright 2022, Chinese Laser Press.

In 2004, Correia et al. [72] designed an arrayed F–P spectrometer with 16 bands by varying the thickness of SiO2 deposited on CMOS, overcoming the limited band count and the need for displacement actuation of previously developed F–P spectrometers. Almost concurrently, Wang et al. successively proposed dielectric-structure and dual-cavity F–P filters [73], and realized spectral imaging with more bands by improving the lithography [74] and deposition processes [75]. Currently, most related research focuses on the efficient or high-precision fabrication of cavity layers with different thicknesses (nanoimprinting [76], electron beam lithography [77], laser direct-write lithography [78]).
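The dependence of the transmission peak on cavity thickness that underlies these designs can be illustrated with the standard Airy transmission formula of an ideal lossless cavity; the refractive index, mirror reflectance, and thicknesses below are illustrative assumptions, not values from the cited works.

```python
import numpy as np

# Ideal Fabry-Perot transmission T = 1 / (1 + F_c * sin^2(delta / 2)) with
# round-trip phase delta = 4 * pi * n * d * cos(theta) / lambda: transmission
# peaks occur where delta is a multiple of 2*pi, so changing the cavity
# thickness d shifts the pass band.

def fp_transmission(wavelength_nm, d_nm, n=1.45, R=0.9, theta=0.0):
    delta = 4 * np.pi * n * d_nm * np.cos(theta) / wavelength_nm
    F_c = 4 * R / (1 - R) ** 2                        # coefficient of finesse
    return 1.0 / (1.0 + F_c * np.sin(delta / 2) ** 2)

wl = np.linspace(400, 700, 1000)                      # nm
for d in (150, 170, 190, 210):                        # illustrative cavity thicknesses (nm)
    T = fp_transmission(wl, d)
    print(f"d = {d} nm -> transmission peak near {wl[np.argmax(T)]:.0f} nm")
```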

3.2.3 Subwavelength patterned grating filter array

Conventional diffraction gratings are often used as dispersive devices for spectral imaging. The phenomena of guided mode resonance or surface plasmon resonance allow well-designed sub-wavelength patterned gratings to be used as narrow-band filters [79], [80], [81], [82]. Thus, subwavelength patterned gratings customized by varying grating parameters (thickness, period, duty cycle) can provide significant transmission for specific wavelengths, and pixelated grating filter arrays can be integrated with sensors for snapshot spectral imaging.

In 2010, Haïdar et al. [83] fabricated a mid-wave infrared filter array with 10 wavelength bands using symmetric, freestanding sub-wavelength metallic gratings, and this grating structure with a narrow slit theoretically enables near-perfect optical transmission, as shown in Figure 11(a and b).

Figure 11: 
Patterned grating color filters [82], [84]. (a) Schematic diagram of the plasmonic nanoresonators. (b) Simulated transmission spectra for the RGB filters. (c–d) SEM image and transmission spectra of two-dimensional hole-array plasmonic filters. (a and b) Reproduced with permission [82]. Copyright 2010, Nature Publishing Group. (c–d) Reproduced with permission [84]. Copyright 2012, Springer.

However, considering the polarization-sensitive nature of one-dimensional gratings, some scholars have begun to design two-dimensional sub-wavelength structures. Chen et al. [84] proposed periodic sub-wavelength hole arrays, which achieve transmission at different wavelengths by changing the period of the cells, see Figure 11(c and d). This plasmonic color filter was directly integrated onto the sensor using electron beam lithography and dry etching, achieving panchromatic sensitivity in the visible band over 100 × 100 pixels. Subsequently, numerous similar arrays of sub-wavelength hole structures have been proposed for high-performance filtering in different wavelength bands [85].

Delving further, these studies, in conjunction with the field of surface plasmonics, have given rise to the rapidly developing field of metasurfaces. This emerging frontier offers novel avenues for advancements in spectral imaging.

3.3 Diffraction modulation spectral imaging

A frequently employed modulator in spectral imaging is the diffractive optical element (DOE), a phase modulator that introduces an additional phase change to the light field through its surface profile. This modulation produces a wavelength-dependent point spread function (PSF), enabling light at different wavelengths to be distinguished. With different types of DOEs, phase-encoded spectral imaging can be categorized into the following three groups.

3.3.1 Diffractive diffuser

The earliest diffraction-based snapshot spectral imaging was the computed tomography imaging spectrometer (CTIS) proposed by Okamoto et al. in 1991 [86], which has been gradually developed and applied to biomedical, astronomical, agricultural, and other fields [87], [88], [89]. In this system [Figure 12(a)], the object $f(x, y, \lambda)$ passes through the diffraction grating (which can be regarded as a diffuser) and is collected by the FPA:

(14) $I(x, y) = \int_0^{\infty} r(\lambda)\, \big[f(x, y, \lambda) * p(x, y, \lambda)\big]\, d\lambda,$

where $*$ denotes convolution, $r(\lambda)$ is the spectral response of the detector, and $p(x, y, \lambda)$ is the PSF of the CTIS system, which can be expressed as

(15) $p(x, y, \lambda) = \sum_{i=-I}^{I} \sum_{j=-J}^{J} a_{ij}\, \delta\left(x - \dfrac{i \lambda f}{d},\; y - \dfrac{j \lambda f}{d}\right),$

where $\delta$ is the Dirac delta function, $I$ and $J$ are the maximum diffraction orders, $a_{ij}$ is the corresponding diffraction efficiency, $f$ is the focal length of the imaging lens, and $d$ is the grating constant.

Figure 12: 
The computed tomography imaging spectrometer. (a) System optical layout. (b) Schematic diagram of the computed tomography imaging spectrometer [90]. (c) Multiple diffraction pattern image simulation [91]. (b) Reproduced with permission [90]. Copyright 2023, Optica Publishing Group.

The raw data on the FPA can be represented as

(16) $I(x, y) = \sum_{i=-I}^{I} \sum_{j=-J}^{J} a_{ij} \int_0^{\infty} r(\lambda)\, f\left(x - \dfrac{i \lambda f}{d},\; y - \dfrac{j \lambda f}{d},\; \lambda\right) d\lambda.$

As shown in Figure 12(b), the acquired image contains projections of the data cube along multiple angles, where the 0th diffraction order is equivalent to the integration of the cube data along the spectral axis on the FPA. The remaining diffraction orders undergo wavelength-dependent spatial shifts, producing dispersed projections in all directions.
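A numerical sketch of the CTIS measurement of Eq. (16) is given below; the diffraction orders, their efficiencies, and the dispersion scale are illustrative assumptions, and in a real instrument the dispersion is large enough to keep the projections from overlapping.

```python
import numpy as np
from scipy.ndimage import shift

# Minimal sketch of the CTIS measurement in Eq. (16): each diffraction order
# (i, j) contributes a copy of every band image shifted by (i, j) * lambda * f / d,
# and all copies (plus the undispersed zeroth order) sum on one large detector.

H, W, L = 64, 64, 16
cube = np.random.rand(H, W, L)                        # scene f(x, y, lambda)
wavelengths = np.linspace(450, 650, L)                # nm
orders = [(i, j) for i in (-1, 0, 1) for j in (-1, 0, 1)]
a = {o: 1.0 if o == (0, 0) else 0.4 for o in orders}  # diffraction efficiencies a_ij
scale = 0.1                                           # stands in for f/d (pixels per nm)

detector = np.zeros((3 * H, 3 * W))                   # room for the dispersed projections
for (i, j) in orders:
    for k, lam in enumerate(wavelengths):
        canvas = np.zeros_like(detector)
        canvas[H:2 * H, W:2 * W] = cube[:, :, k]      # band k placed at the center
        dy, dx = i * scale * lam, j * scale * lam     # wavelength-dependent shift
        detector += a[(i, j)] * shift(canvas, (dy, dx), order=1, mode='constant')
```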

Additionally, numerous studies have focused on improving the spectral resolution, and diffraction gratings with different projections [92], [93], [94], [95] have been designed [Figure 12(c)] to improve the performance of CTIS, taking into account the spatial spectral trade-off, FPA factor, and missing cone angle [96], [97].

All the above diffraction grating modulators realize regular, discrete phase modulation, and some random phase modulation devices have also been used for spectral imaging. For example, Wang et al. designed a random discrete phase device [Figure 13(a)] that can be used as a filter for spectral imaging after arraying [98], and Sahoo et al. proposed a random continuous phase modulation device (a scattering medium) [Figure 13(b)], which likewise exploits the variability of the PSF across spectral bands to achieve spatial-spectral modulation [99].

Figure 13: 
Schematic of multi-spectral imaging of scattering medium. (a) Diffuser consisting of a designed array of randomized highly diffractive filters. (b) Light travels through a strong scattering medium and produces a speckle pattern on a monochrome camera.

3.3.2 Diffractive lens

Unlike diffractive diffusers, diffractive lenses generally have a rotationally symmetric structure and act as axially chromatic DOEs. They can achieve chromatic imaging without additional lenses, and hence such systems are also referred to as lensless spectral imaging. They are lightweight and compact, with advantages in scalability and field of view.

For a diffractive spectral imaging system [Figure 14(a)], the incident light field with initial phase $\phi_0$ and amplitude $A_0$ at the front surface of the DOE can be expressed as

(17) $u_0(x, y, \lambda) = A_0\, e^{i \phi_0(x, y, \lambda)}.$

Figure 14: 
Spectral imaging with diffraction lenses. (a) Imaging via a diffractive lens and its PSF. (b) Diffractive lens designed in [100]. Reproduced with permission [100]. (a and b) Copyright 2019, Association for Computing Machinery.

As the light field passes through the DOE, it acquires a phase shift generated by the DOE surface profile $h(x, y)$:

(18) $\phi_h(x, y, \lambda) = \dfrac{2\pi}{\lambda}\big(n(\lambda) - n_0(\lambda)\big)\, h(x, y),$

where $n(\lambda)$ and $n_0(\lambda)$ denote the refractive indices of the DOE and the propagation medium, respectively, and the modulated light field at the rear surface of the DOE is expressed as

(19) $u_1(x, y, \lambda) = A_0\, e^{i\left[\phi_0(x, y, \lambda) + \phi_h(x, y, \lambda)\right]}.$

After Fresnel diffraction [101] it propagates forward a distance z to reach the detector, which leads to

(20) $u_2(x', y', \lambda) = \dfrac{e^{i 2\pi z / \lambda}}{i \lambda z} \iint u_1(x, y, \lambda)\, e^{\frac{i\pi}{\lambda z}\left[(x' - x)^2 + (y' - y)^2\right]}\, dx\, dy,$

where $(x', y')$ represents the coordinates in image space, and the PSF can be obtained from Eqs. (19) and (20) as

(21) $p(x, y, \lambda) \propto \left| \mathcal{F}\left\{ A_0\, e^{i\left[\phi_0(x, y, \lambda) + \phi_h(x, y, \lambda) + \frac{\pi}{\lambda z}\left(x^2 + y^2\right)\right]} \right\} \right|^2,$

where $\mathcal{F}$ denotes the Fourier transform. If $u_0(x, y, \lambda)$ is a plane wave, Eq. (21) can be simplified as

(22) $p(x, y, \lambda) \propto \left| \mathcal{F}\left\{ e^{i\left[\phi_h(x, y, \lambda) + \frac{\pi}{\lambda z}\left(x^2 + y^2\right)\right]} \right\} \right|^2.$

Similarly to Eq. (14), the image obtained on the detector can be expressed as

(23) $I(x, y) = \int_0^{\infty} r(\lambda)\, \big[u_0(x, y, \lambda) * p(x, y, \lambda)\big]\, d\lambda.$
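The wavelength dependence of the PSF in Eq. (22) can be simulated with a single FFT per wavelength, as sketched below; the height map, refractive index, pixel pitch, and propagation distance are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of the wavelength-dependent PSF in Eq. (22): the DOE height
# map h(x, y) gives a phase phi_h, a quadratic Fresnel phase for distance z
# is added, and the far-field intensity is obtained with a single FFT.

N, pitch = 512, 2e-6                                  # grid size and pixel pitch (m)
x = (np.arange(N) - N // 2) * pitch
X, Y = np.meshgrid(x, x)
h = np.random.default_rng(0).uniform(0, 1.2e-6, (N, N))  # DOE surface profile (m)
z = 5e-3                                              # propagation distance (m)

def psf(lam, n=1.46, n0=1.0):
    phi_h = 2 * np.pi / lam * (n - n0) * h            # Eq. (18)
    fresnel = np.pi / (lam * z) * (X ** 2 + Y ** 2)   # quadratic Fresnel phase
    field = np.exp(1j * (phi_h + fresnel))
    p = np.abs(np.fft.fftshift(np.fft.fft2(field))) ** 2  # Eq. (22)
    return p / p.sum()

psf_blue, psf_red = psf(450e-9), psf(650e-9)          # PSFs differ across wavelengths
```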

Jeon et al. proposed rotational diffraction snapshot spectral imaging [100], where the height profile of the designed DOE is a three-fold spiral structure, see Figure 14(b), enabling it to form a three-winged PSF that rotates with wavelength. The PSF's anisotropy and size consistency greatly improve the quality of the spectral reconstruction performed by the devised end-to-end network. Hu et al. extended the work of [100] by realizing a PSF with two wings and achieved similar results [102]. Similarly, Xu et al. [103] improved the DOE design and reconstruction algorithm of [100]. Baek et al. proposed a DOE with a PSF that varies with both depth and wavelength [104], making it possible to reconstruct spectrum and depth from a single captured image. In addition, Li et al. noted that the fabrication of DOEs requires quantization of the surface height, and obtained better results by taking the effect of quantization into account in both DOE design and image reconstruction [105].

3.3.3 Diffractive optical network

Diffractive optical networks (or diffractive deep neural networks, D2NN), first proposed by Lin et al. in 2018, realize the function of neural networks in an all-optical manner through multiple diffractive surfaces [106]. Li et al. used a trained diffractive network to encode spatial information into spectral features, realizing terahertz spectral classification with a single-pixel detector and demonstrating that diffractive networks can be used for multi-wavelength information processing [107].

In 2023, Mengu et al. [108] introduced a diffractive optical network into multispectral snapshot imaging, as shown in Figure 15. This method performs spatial coherent imaging over a broad spectrum while simultaneously directing a pre-determined set of spectral channels to a pixel array on the output plane. In essence, the approach converts a monochromatic focal plane into a virtual MSFA. Notably, this method eliminates the need for spectral filters or image recovery algorithms.

Figure 15: 
Snapshot spectral imaging based on diffractive optical networks [108]. Reproduced with permission [108]. Copyright 2023, Nature Publishing Group.

3.4 Combined modulation spectral imaging

In order to enhance the flexibility of modulation and increase the degree of spatial-spectral information multiplexing, some researchers have employed combined modulation to improve imaging performance, but this also leads to a more complex system architecture and additional computational cost for reconstruction.

3.4.1 Amplitude & wavelength modulation

Arce et al. incorporated pixel-level spectral filters into CASSI, significantly enhancing CASSI's modulation capability in the spectral domain. They argue that the increased modulation depth better satisfies the restricted isometry property (RIP) during the measurement process [109]. They mainly introduced two configurations: the first replaces the binary coded aperture in SD-CASSI with a spectral filter array [109], [110], and the second employs an MSFA-based image sensor designed to capture scene light dispersed by a dispersive element [111], [112]. The spectral response function at each detector pixel is position-dependent; by rotating the dispersive element in the system, multiple snapshots can be obtained for improved imaging [113]. Additionally, they analyzed the application of RGB image sensors in SD-CASSI and found that they provide superior imaging performance compared to monochromatic sensors [114].

3.4.2 Phase and wavelength modulation

Some researchers combine DOEs with MSFA-based sensors to achieve SSI [115], [116]. A spectral DiffuserCam was proposed in [116], building on earlier work [117], where the PSF was designed to span multiple superpixels. In this configuration, the light received on each superpixel is a composite from various points, thereby achieving compressed imaging. Another work [118] replaced the MSFA-based sensor with a color-coded aperture set at a specific distance from a monochromatic sensor, resulting in a shift-variant PSF, and further discussed the impact of the filter distribution on the color-coded aperture on the reconstructed spectra. Kim et al. generated wavelength-dependent PSFs using an LVF combined with phase masks [119], encoding the spectral information onto a monochrome image sensor in a lensless configuration.

3.4.3 Amplitude and phase modulation

Kar et al. [120] proposed diffractive-lens compressive spectral imaging, in which a coded aperture spatially modulates the scene light field and a diffractive lens provides dispersion and focusing. However, this technique requires multiple measurements to achieve diverse sampling, together with sparse reconstruction algorithms to recover the spectral data.

3.5 Spectral reconstruction technologies

Different coded modulation techniques, despite their diverse physical architectures, share a common forward measurement model due to the inherent nature of their compressed measurements. This model can be expressed as the following system of linear equations:

(24) y = Φ f + n ,

where $y$ denotes the system measurement, $f$ is the full data cube of size $M \times N \times L$, $\Phi \in \mathbb{R}^{MN \times MNL}$ is the sensing matrix, and $n$ is the system noise.

Solving this system is a typical ill-posed problem, which is generally approached in two ways: solving a convex optimization problem constructed from prior constraints, or using data-driven deep learning methods for prediction.

Reconstruction methods based on optimization algorithms directly formulate optimization problems using the system’s forward model and manually chosen prior information. The prior information utilized in constructing optimization problems often draws from methods traditionally employed in image reconstruction and restoration, such as sparse priors [121], smoothness priors [122], low-rank priors [123], etc. Sparse priors assume that the data can be represented in a transformed domain as f = Ψt, and the representation in the transformed domain, t, exhibits sparsity. This transformation can be the Fourier transform, wavelet transform, and so on. The resulting optimization problem is formulated as

(25) $\hat{f} = \Psi \cdot \underset{t}{\operatorname{argmin}} \left\| y - \Phi \Psi t \right\|_2^2 + \tau \left\| t \right\|_1,$

where the 1-norm is used to constrain sparsity. The smoothness prior is based on the observation that natural images often exhibit a certain degree of smoothness. An optimization problem can be formulated as

(26) $\hat{f} = \underset{f}{\operatorname{argmin}} \left\| y - \Phi f \right\|_2^2 + \tau \left\| f \right\|_{\mathrm{TV}},$

where the regularization term, total variation (TV), calculates the gradient of adjacent pixels in the image, serving as a common metric for smoothness in image reconstruction. The low-rank prior, considering internal data correlations, stems from the fact that natural images tend to have low-rank structures. The optimization problem can be expressed as

(27) $\hat{f} = \underset{f}{\operatorname{argmin}} \left\| y - \Phi f \right\|_2^2 + \tau \left\| f \right\|_*,$

where the regularization term involves the nuclear norm.

Algorithms for solving these optimization problems include TwIST [124], GPSR [125], GAP-TV [126], and so on. However, these algorithms require iterative computations, resulting in long computation times and limited precision of the solutions. In 2019, Liu et al. proposed a rank-minimization-based method called DeSCI [127] specifically for CASSI, reporting an improvement in peak signal-to-noise ratio (PSNR) of 8.27 dB over previous algorithms. However, the reconstruction process still took several minutes. Because the regularization terms in such optimization problems are determined manually, they may not accurately capture real-world scenarios; this limited generalization makes it challenging to achieve detailed reconstruction of complex targets.
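As a concrete illustration of how such problems are solved, the sketch below applies the iterative shrinkage-thresholding algorithm (ISTA) to the sparse-prior formulation of Eq. (25), assuming for simplicity that Ψ is the identity and that Φ is available as an explicit matrix; the step size, regularization weight, and toy problem are illustrative.

```python
import numpy as np

# Minimal ISTA sketch for Eq. (25) with Psi = I: alternate a gradient step on
# the data-fidelity term with soft-thresholding, which enforces the l1 prior.
# Practical SSI solvers (TwIST, GAP-TV, DeSCI) use matrix-free operators and
# richer priors rather than an explicit sensing matrix.

def soft_threshold(x, thresh):
    return np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)

def ista(y, Phi, tau=0.05, n_iter=200):
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2          # 1 / Lipschitz constant
    f = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ f - y)                  # gradient of 0.5 * ||y - Phi f||^2
        f = soft_threshold(f - step * grad, tau * step)
    return f

# toy problem: recover a sparse vector from compressed random measurements
rng = np.random.default_rng(0)
Phi = rng.standard_normal((128, 512)) / np.sqrt(128)
f_true = np.zeros(512)
f_true[rng.choice(512, 10, replace=False)] = 1.0
y = Phi @ f_true + 0.01 * rng.standard_normal(128)
f_hat = ista(y, Phi)
```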

In recent years, advancements in artificial intelligence technology have led to the development of novel algorithms tailored to diverse physical architectures in SSI. These algorithms, focused on learning general patterns from extensive data, exhibit characteristics such as real-time processing, high reconstruction accuracy, and richer reconstruction details compared to traditional iterative methods [128], [129].

The CASSI system has a simple structure and thus has received the most attention in terms of reconstruction algorithms. Proposed methods include end-to-end frameworks [130], deep unfolding frameworks [131], and others. The current state-of-the-art (SOTA) model, DAUHST [131], employs a transformer-based deep unfolding approach, reporting a PSNR as high as 38.36 dB. This represents a significant improvement of 15.24 dB over the classical TwIST iterative algorithm.

For narrowband filter array applications, numerous studies are dedicated to recovering spectral images from RGB images [63]. Since this can be considered as three-channel multispectral imaging, spectral reconstruction methods designed for RGB can be extended to multispectral filter arrays. Pixel-wise spectral reconstruction methods, such as manifold learning [132] and basis function fitting [133], attempt to directly recover spectral information from measurements in each channel but overlook the spectral information correlation between pixels. Considering this correlation, patch-wise spectral reconstruction methods can achieve better results. The transformer-based MST++ [134], leveraging the spatially sparse yet spectrally self-similar nature of hyperspectral imaging, has achieved high-precision reconstruction and secured the first position in the NTIRE 2022 Challenge on Spectral Reconstruction from RGB.

For CTIS systems, recent efforts have utilized CNN [91] and GAN [90]-based frameworks for spectral reconstruction, resulting in improvements in speed and accuracy.

4 Metasurface-based spectral imaging

Metasurfaces, composed of sub-wavelength nanostructures, offer a flexible means to modulate optical information, including the amplitude, phase, and wavelength of light, in an unconventional manner. This flexibility facilitates the design of highly customized and integrated spectral imaging solutions. Figure 16 illustrates both spatial-spectral mapping and coded reconstruction spectral imaging techniques achieved through the incorporation of metasurfaces.

Figure 16: 
Data modulation scheme for metasurface-based spectral imaging. (a) Using micro-metalens array instead of lenslet array and dispersion devices for field segmentation and dispersion. (b) Using pixelated metasurface substitution to achieve spatial and spectral modulation.

4.1 Spatial-spectral mapping with metasurface

Traditional spatial-spectral mapping systems face challenges such as the substantial space occupied by field splitters, spatial replicators, and dispersive elements, along with issues related to optical path alignment. Recent studies have shown that the tremendous design flexibility offered by metasurfaces can substantially replace complex device cascades, achieving multi-functionality in a single device. For example, a comparison between Figures 1(b) and 16(a) illustrates this capability, resulting in ultra-compact, high-performance snapshot spectral imaging. In the following sections, we will introduce the latest applications of metasurface-based SSI aligned with traditional snapshot approaches.

4.1.1 Integral field type

The work in [135], [136], [137] demonstrates metasurface structures for customized dispersion, where well-designed off-axis focusing angles can be obtained for the dispersion of interest. This capability paved the way for the design of off-axis focusing metalenses with simultaneous focusing and dispersion capabilities. On this basis, Hua et al. [138] designed micro-metalenses with transverse dispersion capability [Figure 17(a–c)] and arranged them into an array to create a compact spectral light-field imaging system [Figure 17(d–f)]. This system seamlessly replaces the lenslet arrays and dispersive devices essential in traditional IFS. A single snapshot from this system can reconstruct spectral and 3D spatial information, offering a spectral resolution of 4 nm and a spatial resolution close to the diffraction limit.

Figure 17: 
Integral field type SSI with metasurface [138]. (a) Principle of the transversely dispersive metalens. (b) Simulation of the transverse distribution of focal spots at different wavelengths. (c) Image of the letter 4 by a single metalens under white-light illumination. (d) Images of the object scene from different viewpoints. (e) Raw data acquired by the detector. (f) Color image after all-focus rendering. (a–f) Reproduced with permission [138]. Copyright 2022, Nature Publishing Group.

Similarly, Chabot et al. [139] proposed a novel slicer concept employing a metasurface to regulate the phase on the glass substrate of the slicer stack, achieving the desired tilt of each slice. The performance of the metasurface slicing concept system was simulated and analyzed by software such as Zemax and PlanOpsim.

4.1.2 Spatial replication type

McClung et al. [140] proposed a multi-aperture parallel metasurface snapshot spectral system (MSSI), capable of 20 spectral channels in the range of 795–980 nm, as shown in Figure 18(a). Each channel system comprises of a metalens doublet and a metasurface-tuned filter. The metalens doublet, consisting of a corrector and a focuser, is composed of 485 nm-tall α-Si nanocolumns with square cross-sections, as shown in Figure 18(b and c). These elements facilitate spatial image replication and chromatic aberration correction within specific wavelength bands. The metasurface-tuned filter is a narrow-band filter with an array of super-atoms sandwiched between distributed Bragg reflectors. The filter’s center wavelength is tuned by varying the nanocolumn diameter.

Figure 18: 
Spatial replication type SSI with metasurface [140], [141]. (a) MSSI schematic and operating principle. (b) Schematic diagram of the micro-nano structure of the metasurface. (c) Multi-channel imaging schematic. (d) Normalized image array of different spectral images. (e) The focusing efficiency of the metasurface at different wavelengths. (f) Diagram of the image captured by the detector. (a–d) Reproduced with permission [140]. Copyright 2020, American Association for the Advancement of Science. (e–f) Reproduced with permission [141]. Copyright 2023, Nature Publishing Group.
Figure 18:

Spatial replication type SSI with metasurface [140], [141]. (a) MSSI schematic and operating principle. (b) Schematic diagram of the micro-nano structure of the metasurface. (c) Multi-channel imaging schematic. (d) Normalized image array of different spectral images. (e) The focusing efficiency of the metasurface at different wavelengths. (f) Diagram of the image captured by the detector. (a–d) Reproduced with permission [140]. Copyright 2020, American Association for the Advancement of Science. (e–f) Reproduced with permission [141]. Copyright 2023, Nature Publishing Group.

Lin et al. [141] proposed a multi-wavelength off-axis focusing mirror (MOFM) to realize four-channel spectral imaging. This innovative metasurface device, constructed from multi-resonance plasma super-atoms, was subsequently combined with a small-data learning theory to realize 18-channel spectral imaging in the visible band, as shown in Figure 18(e and f).

4.2 Coded reconfiguration with metasurface

Empowered by efficient computational acceleration, metasurfaces can achieve nearly all forms of traditional coded aperture spectral imaging at a much lower cost, especially enabling on-chip spectral imaging to become possible. The optical field modulation realized by metasurfaces can be multifaceted. It can flexibly replace conventional coding devices to realize joint modulation. We present here the improved design and development potential of conventional systems in terms of amplitude-coded, phase-coded, and wavelength-coded, as well as the potential for development.

4.2.1 Improved amplitude-coded spectrometer

Traditional CASSI systems typically achieve coded modulation by placing binary pixelated coding mask at different positions in the optical path. When combined with dispersive elements, sparse spectral imaging can be realized. However, CASSI systems are generally bulky, and the utilization of fixed coding patterns impacts the incoherence of the system matrix, thereby limiting there practical applications.

The work of Yako et al. in 2023 demonstrated the realization of CASSI using a single device integrated with a detector [142], as shown in Figure 19. They used an optimally-designed F–P coding mask to achieve three-dimensional encoding. Specifically, varying the thickness of the F–P cavity produces different transmission spectra. The spectral transmittance of the mask cells enables the corresponding spectra to be gray-scale encoded, and the highly compressed images are subsequently decoded and reconstructed by the TwIST algorithm. Notably, the manufacturing difficulty of this F–P design is directly proportional to the number of transmission spectra.

Figure 19: 
Schematic diagram of a video-rate hyperspectral camera [142]. (a) Schematic diagram of spatial-spectral coding modulation. (b) Transmission patterns of coded mask at different wavelengths. (c) Integration of detector and coded mask. (a–c) Reproduced with permission [142]. Copyright 2023, Nature Publishing Group.
Figure 19:

Schematic diagram of a video-rate hyperspectral camera [142]. (a) Schematic diagram of spatial-spectral coding modulation. (b) Transmission patterns of coded mask at different wavelengths. (c) Integration of detector and coded mask. (a–c) Reproduced with permission [142]. Copyright 2023, Nature Publishing Group.

Similarly, Yang et al. [143] developed a free-form metasurface-based spectral imaging chip to achieve fast spectral imaging. In contrast to those metasurface with excellent spectral filtering properties, the proposed ones are easier to fabricate.

4.2.2 Improved wavelength-coded type

The great potential of novel optical metasurfaces for spectral filtering was mentioned in the previous section, and here we categorize them into two types, narrowband and broadband, for detailed elaboration.

4.2.2.1 Narrowband filters array

Utilizing the interaction between micro-nanostructures and electromagnetic waves, a variety of metasurface-based narrowband spectral filters have been designed to be used as MSFAs. These micro-nanostructures can take forms such as all-dielectric resonators [144], [145], [146], hybrid plasmonic-dielectric nanostructures [147], and so on.

These metasurface-based MSFAs can be paired with detectors to directly read out values across different spectral bands, see Figure 20. In 2018, Shaltout et al. [148] designed a compact F–P nanocavity embedded in a metasurface. Different resonant wavelengths were obtained by optimally designing the width of the metasurface structural unit, while multiple cavities allowed by the same planar chip ensured multispectral filtering capability, with the potential to be extended to on-chip spectral imaging. Lee et al. [149] integrated dielectric multilayer film filters into CMOS image sensors and adjusted the transmission wavelengths of the corresponding spectral channels by changing the size and position of the Si nanopillar arrays embedded into the respective pixel multilayer resonant cavities. Remarkably, this design obviates the need for the alignment of optical elements, and the system includes 20 channels, covering a range from 700 to 950 nm, with each spectral channel having a Full Width at Half Maximum (FWHM) of 2.0 nm.

Figure 20: 
Schematic of metasurface narrowband filtering [144]. (a) Pixelated metasurface image. (b) SEM micrograph. (c) Diagram of the imaging system. (d–e) Reflection images of the pixelated metasurface recorded at four specific wave numbers. Reproduced with permission [144]. (a–e) Copyright 2018, American Association for the Advancement of Science.
Figure 20:

Schematic of metasurface narrowband filtering [144]. (a) Pixelated metasurface image. (b) SEM micrograph. (c) Diagram of the imaging system. (d–e) Reflection images of the pixelated metasurface recorded at four specific wave numbers. Reproduced with permission [144]. (a–e) Copyright 2018, American Association for the Advancement of Science.

4.2.2.2 Broadband filters array

For spectral detection, methods based on broadband random encoding have been proposed, such as quantum dot spectrometers [150], spectrometers based on photonic crystal slabs [151], and spectrometers based on metasurfaces and multilayer films [152], etc. These methods rely on the principle of wavelength multiplexing, where spectral are modulated using elements with different spectral responses, and finally the original spectral information is reconstructed from the modulated detection data. The micro-nano structures on the metasurface enable it to have a certain wavelength response over a wide spectral range, making it suitable for the fabrication of broadband MSFAs, see Figure 21.

Figure 21: 
Schematic of metasurface broadband filtering [153]. (a) Diagram of the imaging system consisting of the metasurface layer, microlens layer and image sensor layer and SEM images of the three shape patterns. (b) Optical micrographs of 20 × 20 different metasurface units fabricated. A micro-spectrometer can contain 5 × 5 or 7 × 7 cells as shown in the red/green/yellow boxes. (c) Transmission spectra of four metasurface cells with free-form meta-atoms and effective indices of the Bloch mode. (a–c) Reproduced with permission [153]. Copyright 2022, Wiley-VCH.
Figure 21:

Schematic of metasurface broadband filtering [153]. (a) Diagram of the imaging system consisting of the metasurface layer, microlens layer and image sensor layer and SEM images of the three shape patterns. (b) Optical micrographs of 20 × 20 different metasurface units fabricated. A micro-spectrometer can contain 5 × 5 or 7 × 7 cells as shown in the red/green/yellow boxes. (c) Transmission spectra of four metasurface cells with free-form meta-atoms and effective indices of the Bloch mode. (a–c) Reproduced with permission [153]. Copyright 2022, Wiley-VCH.

Assuming that K microstructures with different spectral modulation functions T i λ are designed, the target spectrum f λ is modulated by these elements and then detected by a monochromatic detector separately to obtain K measurements:

(28) I i = f λ T i λ d λ + N i , i = 1,2 , , K ,

where N i is the noise. The measurement model can also be expressed as

(29) y = T f + n ,

where T R L × L denotes the full spectral modulation function, f is the spectrum to be solved, and y is the measured value. Typically, L′ ≪ L, then Eq. (29) becomes a compressed measurement model and the original spectral can be reconstructed using a compressive reconstruction algorithm.

Wang et al. [151] presented an on-chip spectral sensor based on photonic crystal arrays in 2019, and demonstrated its capability for snapshot spectral imaging. By changing the period, size, and other parameters of the microstructures on the photonic crystals, 36 different transmission spectra were obtained, and the reconstructed spectra had a resolution of 1 nm in the range of 550–750 nm.

In 2020, Wu et al. [154] proposed a broadband random MSFA based on surface plasmonic exciton (SPP) with nine modes, where each filter consists of square holes in a square array of dots composed of a 100 nm-thick aluminum film. In 2022, Wu et al. [155] proposed a random broadband filter array based on all-dielectric gratings. In the random broadband filter array proposed by Hu et al. [152], the metasurface repeating unit of each pixel has a different shape, which in turn gives a different spectral response.

Cui and Huang et al. designed a metasurface broadband filter array with even more diverse structures [153], [156], which was subsequently developed as a real-time hyperspectral imaging chip using a deep learning-based algorithm for reconstruction, with a spectral resolution of 0.8 nm in the 450–750 nm range (0.5 nm in Ref [153]) and a spatial resolution of 356 × 436 [143]. The team’s accomplishments have reported high-precision spectral reconstruction in application scenarios such as mouse brain blood flow detection [156], face recognition [157], and autonomous driving [143].

4.2.3 Improved phase-coded spectrometer

Metasurface devices can be used to replace traditional DOE elements such as conventional 2D gratings and diffractive lenses.

In 2023, Zhang et al. [158] utilized the powerful phase modulation capability of metasurfaces to design a metalens with wavelength-dependent PSFs whose phase profile consists of a fixed hyperbolic phase and an optimizable polynomial phase. Snapshot spectral imaging was achieved through the joint optimization of device design and reconstruction algorithm, demonstrating the great advantages of the metasurface over conventional diffraction spectral imaging. Zhou et al. [159] proposed an all-dielectric metamaterial array for replacing the computer-generated holographic (CGH) in order to make the CTIS more compact, which consists of two-dimensional TiO2 nanopillar arrays aligned on a SiO2 substrate. The results show that the optimized design can achieve diffraction projection of five diffraction orders while ensuring the uniformity of the spot.

5 Conclusions and perspectives

Snapshot Spectral Imaging (SSI) has made great progress since its introduction. This paper reviews the research progress of SSI techniques, including spatial-spectral mapping imaging techniques of integral field type and spatial replication type; amplitude modulation, wavelength modulation, phase modulation, and joint modulation techniques in coded reconstruction snapshot spectrometers; and the latest techniques involving device substitution or system integration using metasurfaces.

Compared with scanning or tunable spectral imaging methods that suffer from the “time-multiplexing dilemma,” SSI techniques offer unparalleled advantages in capturing dynamic scenes and hold promise for high spatial and spectral resolution imaging as optical fabrication and intelligent algorithms evolve.

Table 1 compares the performance parameters of each representative technology, including core devices, band range, spectral-spatial resolution, size, and spectral reconstruction speed. Obviously, while the existing array of technological solutions can cover almost all spectral ranges, achieve high-resolution imaging, and some systems can even achieve on-chip integration, the realization of systems that combines all these advantages remains challenging and has yet to emerge. It still faces the trade-offs between manufacturing difficulty, system complexity, imaging performance, and computational burden.

Table 1:

Comparison of different snapshot spectral imaging technologies.

Type Device Spectral range/nm Spectral resolution/nm Spatial resolution/px Size Reconstruction speed/fps
Spatial-spectral mapping spectral imaging Slicer mirrors [161]: 520–660 5.6 100 × 100 Table-top N/A
[162]: 450–650 3.3 285 × 285 Table-top N/A
[13]: 480–1000 0.26 300 × 300 Room-sized N/A
Lenslet array [163]: 400–760 0.74 268 × 76 Table-top N/A
[24]: 500–650 0.82–4.17 35 × 35–40 × 40 Table-top N/A
Optical fiber bundle [30]: 450–750 4.9 188 × 170 Table-top N/A
[164]: 515–570 2.75 350 × 350 Table-top N/A
[165]: 400–1050 3.2 ∼63 × 63 Table-top N/A
[165]: 950–5000 8 22 × 22 Table-top N/A
Kösters prism [39]: 800–1700 225 Table-top N/A
Linear variable filter [45]: 380–850 5.8 400 × 400 Hand-sized N/A
Coded reconfiguration spectral imaging Coding mask [166]: 455–650 5.9 256 × 248 Table-top 30
[167]: 3770–4800 10 640 × 480 Table-top 50
[168]: 7,700–14,000 94 640 × 512 Table-top 50
Filter array [169]: 406–688 25 ∼1008 × 759 On-chip
[142]: 450–650 10 640 × 480; 1920 × 1080 On-chip 32.3; 7.14
[170]: 1500–1800 16 80 × 60 On-chip
Diffractive optical element [88]: 470–740 5 203 × 203 Table-top
[100]: 420–650 9.2 1440 × 960 Hand-sized 7.8
Metasurface-based spectral imaging Dispersive metalens [138]: 400–667 4 3600 × 3600 On-chip
Off-axis focusing metamirror [141]: 480–650 9.4 On-chip
Metasurface filters array [156]: 450–750 0.8 356 × 436 On-chip >30
Parallel metasystems [140]: 795–980 9.25 ∼240 × 240 On-chip

For spatial-spectral mapping spectrometers, which rely heavily on innovative optical designs for one-to-one mapping between spectral cube voxels and detector pixels, there is an inherent trade-off between spectral and spatial resolution, further limited by the number of detector pixels. The tunable system in [24] is a typical case where an increase in spectral resolution implies a compromise in spatial resolution. As mentioned earlier, the resolution and bandwidth of an integral field spectrometers are directly proportional to the effective dispersion optical path of the system, e.g., telescope systems in astronomical observations typically have meter-scale volumes and require cascading multiple detectors to obtain sub-nanometer spectral resolution. Spatial replication imaging spectrometers typically based on beam-splitting prisms or lenslet, although relatively compact, have a spectral resolution determined by the number of cascades of beam-splitting devices or the number of arrays of lenslet, and have difficulty in achieving high-resolution imaging. Despite these limitations, they have the significant advantage of acquiring spectral cube data of the target scene without computation, and “what you see is what you get” ensures the reliability and stability of their imaging, allowing them to be used in astronomical and microscopic imaging.

With the development of computational imaging, coded reconstruction spectrometers are expected to address the limitations of spatial-spectral mapping type SSI techniques, which are constrained by the number of detector pixels. These systems employ innovative optical devices, such as coded masks, diffractive devices, and pixel-level filter pieces, to achieve optical field modulation. These devices often require photolithography or coating processes to attain patterned micro-nano-structures or multilayer film system structures. With the emergence of technologies like laser beam direct writing and nanoimprinting, high-volume, low-cost device fabrication becomes feasible. Currently, the most mature wavelength-encoded spectrometers have been commercialized in the visible near-infrared band, enabling multi-channel, high-resolution spectral snapshot spectral imaging. However, a significant challenge faced by computational spectrometers is the requirement for an accurate system measurement matrix, especially for amplitude-encoded and phase-encoded types. This means that multiple multi-band spectral calibrations must be performed to model the imaging forward, which directly affects the upper limit of the imaging resolution of the system. In addition, the computational load is another challenge they face. The multiplexing of spatial-spectral information improves the sampling efficiency on the one hand, but it also leads to further ill-conditioning of the reconstruction model, which directly affects the quality of the reconstructed data cube; and it is difficult to apply the reconstruction model to refined observational scenarios, considering that the reconstruction model has a non-unique solution, i.e., theoretically it is difficult to make the spectral reconstruction exactly the same every time. Certainly, the development of deep learning technology enables training based on extensive data, incorporating prior knowledge into the spectral reconstruction process. This represents a crucial pathway for further enhancing the high-precision and high-resolution reconstruction of spectra.

One of the most exciting developments in SSI is the introduction and integration of metasurfaces. These techniques provide a wide range of possibilities for manipulating optical properties at sub-wavelength scales, thereby expanding the degrees of freedom to achieve optimal data acquisition conditions. Differing from traditional dispersion devices that trade increased propagation distance for higher resolution and broader bandwidth, metasurface devices have the capability to achieve efficient wide-angle super-dispersion [135], [136], [137]. This enhances the upper limit of spectral resolution. Furthermore, it significantly reduces the required dispersion optical path, thereby minimizing system dimensions. The previously mentioned off-axis dispersion-concentrating meta-lens serves as a compelling demonstration [138], [141]. Including commercially available spectral-filtering metasurfaces [160], multi-functional metasurfaces present an opportunity for creating more compact, efficient, and versatile SSI systems. Currently, the primary challenge faced by metasurfaces lies in manufacturing difficulties. For instance, the dispersive performance of a phase-dispersion meta-lens is directly proportional to its structural height, where a greater height implies a more complex manufacturing process. However, with the advancements in micro-nano manufacturing processes, the realization of large-area, high aspect ratio, and mass production techniques becomes plausible. In recent years, the explosion in computational capabilities further empowers metasurface snapshot spectral imaging. This not only significantly enhances the high-resolution imaging capabilities of metasurfaces but also facilitates the reverse design of superior metasurface structures.

In addition, some new imaging devices and techniques are also expected to address the challenges faced by SSI. Examples include perovskites or quantum dot detectors [171], [172], [173], which have the potential to resolve the resolution trade-off problem inherent in existing spectral imaging techniques due to the detector’s structural or characteristic attributes. Another avenue of exploration involves studies employing dual-frequency combs as active light sources [174], integrating ultrafast lasers [175] and multiplexing techniques to achieve temporal and spectral high-resolution SSI [176]. As for metasurface devices, some flexible or tunable metasurfaces [177], [178], [179] also opens avenues for multifunctional or high-resolution SSI. These innovations represent promising directions for advancing snapshot spectral imaging.

In summary, snapshot spectral imaging remains an evolving technology, although getting the best of both spectral resolution and spatial imaging performance is challenging. In the future, snapshot spectroscopic imaging will continue to push the limits of higher imaging dimensions [180], [181], [182], faster imaging speeds [32], [183], and higher imaging resolutions [184]. Of course, this will require novel optics, innovative optical engineering, and efficient reconstruction algorithms to achieve.


Corresponding authors: Kai Ni and Qian Zhou, Division of Advanced Manufacturing, Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China, E-mail: (K. Ni), (Q. Zhou) (Q. Zhou); and Benfeng Bai, State Key Laboratory of Precision Measurement Technology and Instruments, Department of Precision Instrument, Tsinghua University, Beijing, China, E-mail: (B. Bai)

Award Identifier / Grant number: JCYJ20200109143006048

Award Identifier / Grant number: JCYJ20210324115813037

Award Identifier / Grant number: 62175121

Award Identifier / Grant number: 62275140

Funding source: Shenzhen Science and Technology Program

Award Identifier / Grant number: WDZC20231127112247001

  1. Research funding: This work was supported by National Natural Science Foundation of China (62175121, 62275140); Shenzhen Fundamental Research Funding (JCYJ20200109143006048, JCYJ20210324115813037); Shenzhen Science and Technology Program (WDZC20231127112247001).

  2. Author contributions: All authors have accepted responsibility for the entire content of this manuscript and approved its submission.

  3. Conflict of interest: Authors state no conflicts of interest.

  4. Informed consent: Informed consent was obtained from all individuals included in this study.

  5. Ethical approval: The conducted research is not related to either human or animals use.

  6. Data availability: Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.

References

[1] M. C. Kriegmair, et al.., “Multiparametric cystoscopy for detection of bladder cancer using real-time multispectral imaging,” Eur. Urol., vol. 77, no. 2, pp. 251–259, 2020. https://doi.org/10.1016/j.eururo.2019.08.024.Search in Google Scholar PubMed

[2] S. Junaid, et al.., “Video-rate, mid-infrared hyperspectral upconversion imaging,” Optica, vol. 6, no. 6, pp. 702–708, 2019. https://doi.org/10.1364/OPTICA.6.000702.Search in Google Scholar

[3] T. W. Liu, S. T. Gammon, and D. Piwnica-Worms, “Multi-modal multi-spectral intravital microscopic imaging of signaling dynamics in real-time during tumor-immune interactions,” Cells, vol. 10, no. 3, p. 499, 2021. https://doi.org/10.3390/cells10030499.Search in Google Scholar PubMed PubMed Central

[4] P. N. Hedde, R. Cinco, L. Malacrida, A. Kamaid, and E. Gratton, “Phasor-based hyperspectral snapshot microscopy allows fast imaging of live, three-dimensional tissues for biomedical applications,” Commun. Biol., vol. 4, no. 1, 2021, Art. no. 721. https://doi.org/10.1038/s42003-021-02266-z.Search in Google Scholar PubMed PubMed Central

[5] N. Hagen, “Survey of autonomous gas leak detection and quantification with snapshot infrared spectral imaging,” J. Opt., vol. 22, no. 10, 2020, Art. no. 103001. https://doi.org/10.1088/2040-8986/abb1cf.Search in Google Scholar

[6] A. Dolet, et al.., “Gas characterisation based on a snapshot interferometric imaging spectrometer,” in Image and Signal Processing for Remote Sensing XXV, Strasbourg, France, 2019, p. 1115502.10.1117/12.2533338Search in Google Scholar

[7] Z. He, N. Williamson, C. D. Smith, M. Gragston, and Z. Zhang, “Compressed single-shot hyperspectral imaging for combustion diagnostics,” Appl. Opt., vol. 59, no. 17, pp. 5226–5233, 2020. https://doi.org/10.1364/AO.390335.Search in Google Scholar PubMed

[8] M. Si, Q. Cheng, L. Yuan, Z. Luo, W. Yan, and H. Zhou, “Study on the combustion behavior and soot formation of single coal particle using hyperspectral imaging technique,” Combust. Flame, vol. 233, no. 1, p. 111568, 2021. https://doi.org/10.1016/j.combustflame.2021.111568.Search in Google Scholar

[9] D. S. Zheng, et al.., “Real time monitoring vapor fluctuations through snapshot imaging by short wave IR TuLIPSS,” in Conference on Infrared Remote Sensing and Instrumentation XXX, San Diego, CA, 2022.10.1117/12.2633583Search in Google Scholar

[10] Q. L. Li, X. F. He, Y. T. Wang, H. Y. Liu, D. R. Xu, and F. M. Guo, “Review of spectral imaging technology in biomedical engineering: achievements and challenges,” J. Biomed. Opt., vol. 18, no. 10, 2013, Art. no. 100901. https://doi.org/10.1117/1.Jbo.18.10.100901.Search in Google Scholar

[11] R. Bacon, et al.., “3D spectrography at high-spatial-resolution .1. concept and realization of the integral field spectrograph tiger,” Astron. Astrophys., Suppl., vol. 113, no. 2, pp. 347–357, 1995.Search in Google Scholar

[12] I. Montilla, E. Pécontal, J. Devriendt, and R. Bacon, “Integral field unit spectrograph for extremely large telescopes,” Publ. Astron. Soc. Pac., vol. 120, no. 868, p. 634, 2008. https://doi.org/10.1086/589517.Search in Google Scholar

[13] F. Henault, et al.., “MUSE: a second-generation integral-field spectrograph for the VLT,” in Proc. SPIE 4841, Instrument Design and Performance for Optical/Infrared Ground-based Telescopes, Hawai’i, US, SPIE, 2002.10.1117/12.462334Search in Google Scholar

[14] T. Chabot, D. Brousseau, and S. Thibault, “Stray light analysis of diamond-turned image slicers,” Opt. Eng., vol. 62, no. 2, 2023, Art. no. 025102. https://doi.org/10.1117/1.Oe.62.2.025102.Search in Google Scholar

[15] Y. Zhang, Z. Zhang, H. Yang, Y. Zhang, Z. Huang, and G. Jin, “Broadband aberration-corrected snapshot spectrometer with a toroidal slicer mirror,” Appl. Opt., vol. 58, no. 4, pp. 826–832, 2019. https://doi.org/10.1364/AO.58.000826.Search in Google Scholar PubMed

[16] Y. Zhang, D. Xu, G. Liu, and H. Yang, “Snapshot spectroscopic microscopy with double spherical slicer mirrors,” Appl. Opt., vol. 60, no. 3, pp. 745–752, 2021. https://doi.org/10.1364/AO.409135.Search in Google Scholar PubMed

[17] R. Content, “Advanced image slicers for integral field spectroscopy with UKIRT and GEMINI,” in SPIE Conference on Infrared Astronomical Instrumentation, Kona, Hi, 1998.10.1117/12.317262Search in Google Scholar

[18] R. Content, “Transparent microslices IFUs: from 200,000 to 5 millions spectra at once,” New Astron. Rev., vol. 50, nos. 4–5, pp. 267–270, 2006. https://doi.org/10.1016/j.newar.2006.03.005.Search in Google Scholar

[19] B. Andrew, A. Sheinis, A. Norton, J. Daly, S. Beaven, and J. Weinheimer, “Snapshot hyperspectral imaging: the hyperpixel array camera,” in Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XV, 2009.Search in Google Scholar

[20] A. Quirrenbach, et al.., “OSIRIS: AO-assisted integral-field spectroscopy at the Keck Observatory,” New Astron. Rev., vol. 49, nos. 10–12, pp. 639–646, 2006. https://doi.org/10.1016/j.newar.2005.10.018.Search in Google Scholar

[21] S. G. Wolff, et al.., “Gemini Planet imager observational calibrations IV: wavelength calibration and flexure correction for the integral field spectrograph,” in 5th Conference on Ground-Based and Airborne Instrumentation for Astronomy, Montreal, Canada, 2014.10.1117/12.2055678Search in Google Scholar

[22] M. M. Anna, et al.., “The infrared imaging spectrograph (IRIS) for TMT: spectrograph design,” in Ground-based and Airborne Instrumentation for Astronomy III, 2010.Search in Google Scholar

[23] J. G. Dwight and T. S. Tkaczyk, “Lenslet array tunable snapshot imaging spectrometer (LATIS) for hyperspectral fluorescence microscopy,” Biomed. Opt. Express, vol. 8, no. 3, pp. 1950–1964, 2017. https://doi.org/10.1364/BOE.8.001950.Search in Google Scholar PubMed PubMed Central

[24] Y. Ji, et al.., “Spatial-spectral resolution tunable snapshot imaging spectrometer: analytical design and implementation,” Appl. Opt., vol. 62, no. 17, pp. 4456–4464, 2023. https://doi.org/10.1364/AO.488558.Search in Google Scholar PubMed

[25] Q. S. Xue, H. X. Bai, F. Q. Lu, J. Yang, and H. LI, “Development of snapshot hyperspectral lmager based on microlens array,” Acta Photonica Sin., vol. 52, no. 05, pp. 309–322, 2023. https://doi.org/10.3788/gzxb20235205.0552223.Search in Google Scholar

[26] T. S. Tkaczy, et al.., “Lightguide, integral field snapshot imaging spectrometer for environmental imaging and earth observations,” in IGARSS 2020 – 2020 IEEE International Geoscience and Remote Sensing Symposium, 2020.10.1109/IGARSS39084.2020.9324349Search in Google Scholar

[27] M. M. Roth, et al.., “The ERA2 facility: towards application of a fiber-based astronomical spectrograph for imaging spectroscopy in life sciences,” in Conference on Modern Technologies in Space-and Ground-Based Telescopes and Instrumentation II, Amsterdam, Netherlands, 2012.10.1117/12.925340Search in Google Scholar

[28] H. T. Lim and V. M. Murukeshan, “A four-dimensional snapshot hyperspectral video-endoscope for bio-imaging applications,” Sci. Rep., vol. 6, no. 1, 2016, Art. no. 24044. https://doi.org/10.1038/srep24044.Search in Google Scholar PubMed PubMed Central

[29] M. H. Tran and B. W. Fei, “Compact and ultracompact spectral imagers: technology and applications in biomedical imaging,” J. Biomed. Opt., vol. 28, no. 4, 2023, Art. no. 040901. https://doi.org/10.1117/1.Jbo.28.4.040901.Search in Google Scholar PubMed PubMed Central

[30] Y. Wang, et al.., “Light-guide snapshot imaging spectrometer for remote sensing applications,” Opt. Express, vol. 27, no. 11, pp. 15701–15725, 2019. https://doi.org/10.1364/OE.27.015701.Search in Google Scholar PubMed

[31] R. French, S. Gigan, and O. l. Muskens, “Snapshot fiber spectral imaging using speckle correlations and compressive sensing,” Opt. Express, vol. 26, no. 24, pp. 32302–32316, 2018. https://doi.org/10.1364/OE.26.032302.Search in Google Scholar PubMed

[32] H. Nemoto, T. Suzuki, and F. Kannari, “Single-shot ultrafast burst imaging using an integral field spectroscope with a microlens array,” Opt. Lett., vol. 45, no. 18, pp. 5004–5007, 2020. https://doi.org/10.1364/OL.398036.Search in Google Scholar PubMed

[33] J. Lu, X. W. Ng, D. Piston, and T. S. Tkaczyk, “Fabrication of a multifaceted mapping mirror using two-photon polymerization for a snapshot image mapping spectrometer,” Appl. Opt., vol. 62, no. 20, pp. 5416–5426, 2023. https://doi.org/10.1364/AO.495466.Search in Google Scholar PubMed

[34] D. Howett, Television Innovations: 50 Technological Developments: A Personal Selection, UK, Kelly Publications, 2006.Search in Google Scholar

[35] N. Hagen, L. Gao, T. Tkaczyk, and R. Kester, “Snapshot advantage: a review of the light collection improvement for parallel high-dimensional measurement systems,” Opt. Eng., vol. 51, no. 11, 2012, Art. no. 111702. https://doi.org/10.1117/1.OE.51.11.111702.Search in Google Scholar PubMed PubMed Central

[36] D. Lang Hendrik and G. Bouwhuis, “Optical system for a color television camera,” U.S. Patent 3202039 Patent Appl. US120039A, 1965-08-24, 1965.Search in Google Scholar

[37] Y. Murakami, M. Yamaguchi, and N. Ohyama, “Hybrid-resolution multispectral imaging using color filter array,” Opt. Express, vol. 20, no. 7, pp. 7173–7183, 2012. https://doi.org/10.1364/OE.20.007173.Search in Google Scholar PubMed

[38] J. Greiner and U. Laux, “A novel compact 4-channel beam splitter based on a Kösters-type prism,” CEAS Space J., vol. 14, no. 2, pp. 253–260, 2022. https://doi.org/10.1007/s12567-021-00418-9.Search in Google Scholar

[39] C. Rothhardt, et al.., “Technical layout and fabrication of a compact all-glass four-channel beam splitter based on a Kösters design,” CEAS Space J., vol. 14, no. 2, pp. 287–301, 2022. https://doi.org/10.1007/s12567-022-00440-5.Search in Google Scholar

[40] A. R. Harvey and D. W. Fletcher-Holmes, “High-throughput snapshot spectral imaging in two dimensions,” in Conference on Spectral Imaging – Instrumentation, Applications and Analysis II, San Jose, Ca, 2003.10.1117/12.485557Search in Google Scholar

[41] A. Gorman, D. W. Fletcher-Holmes, and A. R. Harvey, “Generalization of the Lyot filter and its application to snapshot spectral imaging,” Opt. Express, vol. 18, no. 6, pp. 5602–5608, 2010. https://doi.org/10.1364/OE.18.005602.Search in Google Scholar PubMed

[42] G. Wong, R. Pilkington, and A. R. Harvey, “Achromatization of Wollaston polarizing beam splitters,” Opt. Lett., vol. 36, no. 8, pp. 1332–1334, 2011. https://doi.org/10.1364/OL.36.001332.Search in Google Scholar PubMed

[43] M. W. Kudenov, M. Miskiewicz, N. Sanders, and M. J. Escuti, “Achromatic Wollaston prism beam splitter using polarization gratings,” Opt. Lett., vol. 41, no. 19, pp. 4461–4463, 2016. https://doi.org/10.1364/OL.41.004461.Search in Google Scholar PubMed

[44] M. W. Kudenov and E. L. Dereniak, “Compact real-time birefringent imaging spectrometer,” Opt. Express, vol. 20, no. 16, pp. 17973–17986, 2012. https://doi.org/10.1364/OE.20.017973.Search in Google Scholar PubMed

[45] T. K. Mu, F. Han, D. H. Bao, C. M. Zhang, and R. G. Liang, “Compact snapshot optically replicating and remapping imaging spectrometer (ORRIS) using a focal plane continuous variable filter,” Opt. Lett., vol. 44, no. 5, pp. 1281–1284, 2019. https://doi.org/10.1364/ol.44.001281.Search in Google Scholar

[46] A. Hirai, T. Inoue, K. Itoh, and Y. Ichioka, “Application of multiple-image Fourier transform spectral imaging to measurement of fast phenomena,” Opt. Rev., vol. 1, no. 2, pp. 205–207, 1994. https://doi.org/10.1007/bf03254863.Search in Google Scholar

[47] M. Hubold, R. Berlich, C. Gassner, R. Brüning, and R. Brunner, “Ultra-compact micro-optical system for multispectral imaging,” in Proc. SPIE 10545, MOEMS and Miniaturized Systems XVII, 105450V, San Francisco, California, US, SPIE, 2018.10.1117/12.2295343Search in Google Scholar

[48] M. West, J. Grossmann, and C. Galvan, “Commercial snapshot spectral imaging: the art of the possible,” The MITRE Corporation, McLean, VA, USA, Tech. Rep. MTR180488, 2018.Search in Google Scholar

[49] M. E. Gehm, R. John, D. J. Brady, R. M. Willett, and T. J. Schulz, “Single-shot compressive spectral imaging with a dual-disperser architecture,” Opt. Express, vol. 15, no. 21, p. 14013, 2007. https://doi.org/10.1364/OE.15.014013.Search in Google Scholar PubMed

[50] D. J. Brady and M. E. Gehm, “Compressive imaging spectrometers using coded apertures,” in Defense and Security Symposium, 2006.10.1117/12.667605Search in Google Scholar

[51] Z. Zhao, Z. Meng, Z. Ju, Z. Yu, and K. Xu, “A compact dual-dispersion architecture for snapshot compressive spectral imaging,” in 2021 Asia Communications and Photonics Conference (ACP), 2021.10.1364/ACPC.2021.T4A.269Search in Google Scholar

[52] Z. Yu, et al.., “Deep learning enabled reflective coded aperture snapshot spectral imaging,” Opt. Express, vol. 30, no. 26, p. 46822, 2022. https://doi.org/10.1364/OE.475129.Search in Google Scholar PubMed

[53] A. Wagadarikar, R. John, R. Willett, and D. Brady, “Single disperser design for coded aperture snapshot spectral imaging,” Appl. Opt., vol. 47, no. 10, pp. B44–B51, 2008. https://doi.org/10.1364/AO.47.000B44.Search in Google Scholar PubMed

[54] X. Cao, H. Du, X. Tong, Q. Dai, and S. Lin, “A prism-mask system for multispectral video acquisition,” IEEE Trans. Pattern Anal., vol. 33, no. 12, pp. 2423–2435, 2011. https://doi.org/10.1109/TPAMI.2011.80.Search in Google Scholar PubMed

[55] C. H. F. Rueda, G. A. R. Calderon, and H. A. Fuentes, “Spectral selectivity in compressive spectral imaging based on grayscale coded apertures,” in 2013 XVIII Symposium of Image, Signal Processing, and Artificial Vision (STSIVA), 2013.10.1109/STSIVA.2013.6644929Search in Google Scholar

[56] N. Diaz, H. Rueda, and H. Arguello, “High-dynamic range compressive spectral imaging by adaptive filtering,” in 2015 3rd International Workshop on Compressed Sensing Theory and its Applications to Radar, Sonar and Remote Sensing (CoSeRa), 2015.10.1109/CoSeRa.2015.7330270Search in Google Scholar

[57] N. Diaz, C. Hinojosa, and H. Arguello, “Adaptive grayscale compressive spectral imaging using optimal blue noise coding patterns,” Opt. Laser Technol., vol. 117, no. 1, pp. 147–157, 2019. https://doi.org/10.1016/j.optlastec.2019.03.038.Search in Google Scholar

[58] X. Lin, G. Wetzstein, Y. Liu, and Q. Dai, “Dual-coded compressive hyperspectral imaging,” Opt. Lett., vol. 39, no. 7, pp. 2044–2047, 2014. https://doi.org/10.1364/OL.39.002044.Search in Google Scholar PubMed

[59] X. Lin, Y. Liu, J. Wu, and Q. Dai, “Spatial-spectral encoded compressive hyperspectral imaging,” ACM Trans. Graph., vol. 33, no. 6, pp. 1–11, 2014. https://doi.org/10.1145/2661229.2661262.Search in Google Scholar

[60] H. F. Rueda, A. Parada, and H. Arguello, “Spectral resolution enhancement of hyperspectral imagery by a multiple-aperture compressive optical imaging system,” Ing. Investig., vol. 34, no. 3, pp. 50–55, 2014. https://doi.org/10.15446/ing.investig.v34n3.41675.Search in Google Scholar

[61] B. E. Bayer, “Color imaging array,” U.S. Patent 3971065, 1976-07-20, 1976.Search in Google Scholar

[62] J. Adams, K. Parulski, and K. Spaulding, “Color processing in digital cameras,” IEEE Micro, vol. 18, no. 6, pp. 20–30, 1998. https://doi.org/10.1109/40.743681.Search in Google Scholar

[63] B. Arad, et al.., “NTIRE 2022 spectral demosaicing challenge and data set,” in 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2022.10.1109/CVPRW56347.2022.00103Search in Google Scholar

[64] P.-J. Lapray, X. Wang, J.-B. Thomas, and P. Gouton, “Multispectral filter arrays: recent advances and practical implementation,” Sensors, vol. 14, no. 11, pp. 21626–21659, 2014. https://doi.org/10.3390/s141121626.Search in Google Scholar PubMed PubMed Central

[65] L. Miao, H. Qi, R. Ramanath, and W. E. Snyder, “Binary tree-based generic demosaicking algorithm for multispectral filter arrays,” IEEE Trans. Image Process., vol. 15, no. 11, pp. 3550–3558, 2006. https://doi.org/10.1109/TIP.2006.877476.Search in Google Scholar PubMed

[66] N. Hagen and M. W. Kudenov, “Review of snapshot spectral imaging technologies,” Opt. Eng., vol. 52, no. 9, p. 090901, 2013. https://doi.org/10.1117/1.Oe.52.9.090901.Search in Google Scholar

[67] S. Mihoubi, O. Losson, B. Mathon, and L. Macaire, “Multispectral demosaicing using pseudo-panchromatic image,” IEEE Trans. Comput. Imaging, vol. 3, no. 4, pp. 982–995, 2017. https://doi.org/10.1109/tci.2017.2691553.Search in Google Scholar

[68] S. Mihoubi, “Snapshot multispectral image demosaicing and classification,” Ph.D. dissertation, Lille, France, CRIStAL Laboratory, Université de Lille, 2018.Search in Google Scholar

[69] R. Ramanath, W. E. Snyder, and H. Qi, “Mosaic multispectral focal plane array cameras,” in Infrared Technology and Applications XXX, 2004.10.1117/12.543418Search in Google Scholar

[70] H. K. Aggarwal and A. Majumdar, “Compressive sensing multi-spectral demosaicing from single sensor architecture,” in 2014 IEEE China Summit & International Conference on Signal and Information Processing (ChinaSIP), 2014.10.1109/ChinaSIP.2014.6889259Search in Google Scholar

[71] Z. Xuan, et al.., “On-chip short-wave infrared multispectral detector based on integrated Fabry–Perot microcavities array,” Chin. Opt. Lett., vol. 20, no. 6, p. 061302, 2022. https://doi.org/10.1364/COL.20.061302.Search in Google Scholar

[72] G. Minas, J. C. Ribeiro, J. S. Martins, R. F. Wolffenbuttel, and J. H. Correia, “An array of Fabry-Perot optical-channels for biological fluids analysis,” Sens. Actuators, A, vol. 115, no. 2, pp. 362–367, 2004. https://doi.org/10.1016/j.sna.2004.03.077.Search in Google Scholar

[73] S. Wang, “Research on novel optical and electrical functional materials,” Ph.D. dissertation, Shanghai, China, Shanghai Institute of Technical Physics, Chinese Academy of Sciences, 2003.Search in Google Scholar

[74] S.-W. Wang, X. Chen, W. Lu, L. Wang, Y. Wu, and Z. Wang, “Integrated optical filter arrays fabricated by using the combinatorial etching technique,” Opt. Lett., vol. 31, no. 3, pp. 332–334, 2006. https://doi.org/10.1364/OL.31.000332.Search in Google Scholar PubMed

[75] S.-W. Wang, et al.., “Concept of a high-resolution miniature spectrometer using an integrated filter array,” Opt. Lett., vol. 32, no. 6, pp. 632–634, 2007. https://doi.org/10.1364/OL.32.000632.Search in Google Scholar PubMed

[76] S.-J. Kim, P.-H. Jung, W. Kim, H. Lee, and S.-H. Hong, “Generation of highly integrated multiple vivid colours using a three-dimensional broadband perfect absorber,” Sci. Rep., vol. 9, no. 1, p. 14859, 2019. https://doi.org/10.1038/s41598-019-49906-3.Search in Google Scholar PubMed PubMed Central

[77] X. Li, Z. J. Tan, and N. X. Fang, “Grayscale stencil lithography for patterning multispectral color filters,” Optica, vol. 7, no. 9, pp. 1154–1161, 2020. https://doi.org/10.1364/optica.389425.Search in Google Scholar

[78] Y. Yang, T. Badloe, and J. Rho, “Writing nanometer-scale structures for centimeter-scale color printing,” Adv. Photonics, vol. 5, no. 3, p. 030501, 2023. https://doi.org/10.1117/1.AP.5.3.030501.Search in Google Scholar

[79] S. S. Wang and R. Magnusson, “Design of waveguide-grating filters with symmetrical line shapes and low sidebands,” Opt. Lett., vol. 19, no. 12, pp. 919–921, 1994. https://doi.org/10.1364/OL.19.000919.Search in Google Scholar

[80] H. F. Ghaemi, T. Thio, D. E. Grupp, T. W. Ebbesen, and H. J. Lezec, “Surface plasmons enhance optical transmission through subwavelength holes,” Phys. Rev. B, vol. 58, no. 11, pp. 6779–6782, 1998. https://doi.org/10.1103/PhysRevB.58.6779.Search in Google Scholar

[81] E. Ozbay, “Plasmonics: merging photonics and electronics at nanoscale dimensions,” Science, vol. 311, no. 5758, pp. 189–193, 2006. https://doi.org/10.1126/science.1114849.Search in Google Scholar PubMed

[82] T. Xu, Y.-K. Wu, X. Luo, and L. J. Guo, “Plasmonic nanoresonators for high-resolution colour filtering and spectral imaging,” Nat. Commun., vol. 1, no. 1, p. 59, 2010. https://doi.org/10.1038/ncomms1058.Search in Google Scholar PubMed

[83] R. Haïdar, et al.., “Free-standing subwavelength metallic gratings for snapshot multispectral imaging,” Appl. Phys. Lett., vol. 96, no. 22, p. 221104, 2010. https://doi.org/10.1063/1.3442487.Search in Google Scholar

[84] Q. Chen, et al.., “A CMOS image sensor integrated with plasmonic colour filters,” Plasmonics, vol. 7, no. 4, pp. 695–699, 2012. https://doi.org/10.1007/s11468-012-9360-6.Search in Google Scholar

[85] A. Shaukat, F. Noble, and K. M. Arif, “Nanostructured color filters: a review of recent developments,” Nanomaterials, vol. 10, no. 8, p. 1554, 2020. https://doi.org/10.3390/nano10081554.Search in Google Scholar PubMed PubMed Central

[86] T. Okamoto and I. Yamaguchi, “Simultaneous acquisition of spectral image information,” Opt. Lett., vol. 16, no. 16, pp. 1277–1279, 1991. https://doi.org/10.1364/OL.16.001277.Search in Google Scholar PubMed

[87] W. R. Johnson, D. W. Wilson, W. Fink, M. Humayun, and G. Bearman, “Snapshot hyperspectral imaging in ophthalmology,” J. Biomed. Opt., vol. 12, no. 1, pp. 014036–014036-7, 2007. https://doi.org/10.1117/1.2434950.Search in Google Scholar PubMed

[88] B. K. Ford, M. R. Descour, and R. M. Lynch, “Large-image-format computed tomography imaging spectrometer for fluorescence microscopy,” Opt. Express, vol. 9, no. 9, pp. 444–453, 2001. https://doi.org/10.1364/oe.9.000444.Search in Google Scholar PubMed

[89] E. K. Hege, D. O’Connell, W. Johnson, S. Basty, and E. L. Dereniak, “Hyperspectral imaging for astronomy and space surveillance,” in Imaging Spectrometry IX, 2004.10.1117/12.506426Search in Google Scholar

[90] L. Wu and W. Cai, “CTIS-GAN: computed tomography imaging spectrometry based on a generative adversarial network,” Appl. Opt., vol. 62, no. 10, pp. 2422–2433, 2023. https://doi.org/10.1364/AO.478230.Search in Google Scholar PubMed

[91] C. Douarre, C. F. Crispim-Junior, A. Gelibert, G. Germain, L. Tougne, and D. Rousseau, “CTIS-net: a neural network architecture for compressed learning based on computed tomography imaging spectrometers,” IEEE Trans. Comput. Imaging, vol. 7, no. 1, pp. 572–583, 2021. https://doi.org/10.1109/TCI.2021.3083215.Search in Google Scholar

[92] W. K. Michael, M. C.-J. Julia, J. V. Corrie, L. D. Eustace, and W. A. Riley, “Faceted grating prism for a computed tomographic imaging spectrometer,” Opt. Eng., vol. 51, no. 4, p. 044002, 2012. https://doi.org/10.1117/1.OE.51.4.044002.Search in Google Scholar

[93] C. Volin, M. Descour, and E. Dereniak, “Design of broadband-optimized computer-generated hologram dispersers for the computed-tomography imaging spectrometer,” in Proc. SPIE 4480, Imaging Spectrometry VII, San Diego, CA, US, SPIE, 2002.10.1117/12.453361Search in Google Scholar

[94] M. A. Golub, et al.., “Compressed sensing snapshot spectral imaging by a regular digital camera with an added optical diffuser,” Appl. Opt., vol. 55, no. 3, pp. 432–443, 2016. https://doi.org/10.1364/AO.55.000432.Search in Google Scholar PubMed

[95] J. Hauser, A. Zeligman, A. Averbuch, V. A. Zheludev, and M. Nathan, “DD-Net: spectral imaging from a monochromatic dispersed and diffused snapshot,” Appl. Opt., vol. 59, no. 36, pp. 11196–11208, 2020. https://doi.org/10.1364/AO.404524.Search in Google Scholar PubMed

[96] M. Descour and E. Dereniak, “Computed-tomography imaging spectrometer: experimental calibration and reconstruction results,” Appl. Opt., vol. 34, no. 22, pp. 4817–4826, 1995. https://doi.org/10.1364/AO.34.004817.Search in Google Scholar PubMed

[97] N. Hagen and E. L. Dereniak, “Analysis of computed tomographic imaging spectrometers. I. Spatial and spectral resolution,” Appl. Opt., vol. 47, no. 28, pp. F85–F95, 2008. https://doi.org/10.1364/AO.47.000F85.Search in Google Scholar

[98] P. Wang and R. Menon, “Ultra-high-sensitivity color imaging via a transparent diffractive-filter array and computational optics,” Optica, vol. 2, no. 11, pp. 933–939, 2015. https://doi.org/10.1364/optica.2.000933.Search in Google Scholar

[99] S. K. Sahoo, D. L. Tang, and C. Dang, “Single-shot multispectral imaging with a monochromatic camera,” Optica, vol. 4, no. 10, pp. 1209–1213, 2017. https://doi.org/10.1364/optica.4.001209.Search in Google Scholar

[100] D. S. Jeon, et al.., “Compact snapshot hyperspectral imaging with diffracted rotation,” ACM Trans. Graph., vol. 38, no. 4, pp. 1–13, 2019. https://doi.org/10.1145/3306346.3322946.Search in Google Scholar

[101] J. W. Goodman, Introduction to Fourier Optics, 4th ed. New York, W.H. Freeman, Macmillan Learning, 2017.Search in Google Scholar

[102] H. Hu, et al.., “Practical snapshot hyperspectral imaging with DOE,” Opt. Lasers Eng., vol. 156, no. 1, p. 107098, 2022. https://doi.org/10.1016/j.optlaseng.2022.107098.Search in Google Scholar

[103] N. Xu, et al.., “Snapshot hyperspectral imaging based on equalization designed DOE,” Opt. Express, vol. 31, no. 12, pp. 20489–20504, 2023. https://doi.org/10.1364/OE.493498.Search in Google Scholar PubMed

[104] S.-H. Baek, et al.., “Single-shot hyperspectral-depth imaging with learned diffractive optics,” in 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021.10.1109/ICCV48922.2021.00265Search in Google Scholar

[105] L. Li, L. Wang, W. Song, L. Zhang, Z. Xiong, and H. Huang, “Quantization-aware deep optics for diffractive snapshot hyperspectral imaging,” in 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022.10.1109/CVPR52688.2022.01916Search in Google Scholar

[106] X. Lin, et al.., “All-optical machine learning using diffractive deep neural networks,” Science, vol. 361, no. 6406, pp. 1004–1008, 2018. https://doi.org/10.1126/science.aat8084.Search in Google Scholar PubMed

[107] J. X. Li, et al.., “Spectrally encoded single-pixel machine vision using diffractive networks,” Sci. Adv., vol. 7, no. 13, 2021, Art. no. eabd7690. https://doi.org/10.1126/sciadv.abd7690.Search in Google Scholar PubMed PubMed Central

[108] D. Mengu, A. Tabassum, M. Jarrahi, and A. Ozcan, “Snapshot multispectral imaging using a diffractive optical network,” Light: Sci. Appl., vol. 12, no. 1, p. 86, 2023. https://doi.org/10.1038/s41377-023-01135-0.Search in Google Scholar PubMed PubMed Central

[109] H. Arguello and G. R. Arce, “Colored coded aperture design by concentration of measure in compressive spectral imaging,” IEEE Trans. Image Process., vol. 23, no. 4, pp. 1896–1908, 2014. https://doi.org/10.1109/TIP.2014.2310125.Search in Google Scholar PubMed

[110] H. Rueda, H. Arguello, and G. R. Arce, “Compressive spectral imaging based on colored coded apertures,” in ICASSP 2014 – 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2014.10.1109/ICASSP.2014.6855118Search in Google Scholar

[111] C. V. Correa, H. Arguello, and G. R. Arce, “Compressive spectral imaging with colored-patterned detectors,” in 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2014.10.1109/ICASSP.2014.6855116Search in Google Scholar

[112] C. V. Correa, H. Arguello, and G. R. Arce, “Snapshot colored compressive spectral imager,” J. Opt. Soc. Am. A, vol. 32, no. 10, pp. 1754–1763, 2015. https://doi.org/10.1364/JOSAA.32.001754.Search in Google Scholar PubMed

[113] C. V. Correa, C. A. A. Hinojosa, G. R. Arce, and H. A. Sr, “Multiple snapshot colored compressive spectral imager,” Opt. Eng., vol. 56, no. 4, p. 041309, 2016. https://doi.org/10.1117/1.OE.56.4.041309.Search in Google Scholar

[114] H. Rueda, D. Lau, and G. R. Arce, “Multi-spectral compressive snapshot imaging using RGB image sensors,” Opt. Express, vol. 23, no. 9, p. 12207, 2015. https://doi.org/10.1364/OE.23.012207.Search in Google Scholar PubMed

[115] U. Gundogan and F. S. Oktem, “Computational spectral imaging with diffractive lenses and spectral filter arrays,” in 2021 IEEE International Conference on Image Processing (ICIP), 2021.10.1109/ICIP42928.2021.9506357Search in Google Scholar

[116] K. Monakhova, K. Yanny, N. Aggarwal, and L. Waller, “Spectral DiffuserCam: lensless snapshot hyperspectral imaging with a spectral filter array,” Optica, vol. 7, no. 10, p. 1298, 2020. https://doi.org/10.1364/optica.397214.Search in Google Scholar

[117] N. Antipa, et al.., “DiffuserCam: lensless single-exposure 3D imaging,” Optica, vol. 5, no. 1, pp. 1–9, 2018. https://doi.org/10.1364/OPTICA.5.000001.Search in Google Scholar

[118] H. Arguello, S. Pinilla, Y. Peng, H. Ikoma, J. Bacca, and G. Wetzstein, “Shift-variant color-coded diffractive spectral imaging system,” Optica, vol. 8, no. 11, p. 1424, 2021. https://doi.org/10.1364/optica.439142.Search in Google Scholar

[119] T. Kim, K. C. Lee, N. Baek, H. Chae, and S. A. Lee, “Aperture-encoded snapshot hyperspectral imaging with a lensless camera,” APL Photonics, vol. 8, no. 6, p. 066109, 2023. https://doi.org/10.1063/5.0150797.Search in Google Scholar

[120] O. F. Kar and F. S. Oktem, “Compressive spectral imaging with diffractive lenses,” Opt. Lett., vol. 44, no. 18, pp. 4582–4585, 2019. https://doi.org/10.1364/OL.44.004582.Search in Google Scholar PubMed

[121] E. J. Candes, J. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory, vol. 52, no. 2, pp. 489–509, 2006. https://doi.org/10.1109/TIT.2005.862083.Search in Google Scholar

[122] L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Phys. D, vol. 60, no. 1, pp. 259–268, 1992. https://doi.org/10.1016/0167-2789(92)90242-F.Search in Google Scholar

[123] N. Renard, S. Bourennane, and J. Blanc-Talon, “Denoising and dimensionality reduction using multilinear tools for hyperspectral images,” IEEE Geosci. Remote Sens., vol. 5, no. 2, pp. 138–142, 2008. https://doi.org/10.1109/LGRS.2008.915736.Search in Google Scholar

[124] J. M. Bioucas-Dias and M. A. T. Figueiredo, “A new TwIST: two-step iterative shrinkage/thresholding algorithms for image restoration,” IEEE Trans. Image Process., vol. 16, no. 12, pp. 2992–3004, 2007. https://doi.org/10.1109/TIP.2007.909319.Search in Google Scholar

[125] M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems,” IEEE J-STSP, vol. 1, no. 4, pp. 586–597, 2007. https://doi.org/10.1109/jstsp.2007.910281.Search in Google Scholar

[126] X. Yuan, “Generalized alternating projection based total variation minimization for compressive sensing,” in 2016 IEEE International Conference on Image Processing (ICIP), 2016.10.1109/ICIP.2016.7532817Search in Google Scholar

[127] Y. Liu, X. Yuan, J. Suo, D. J. Brady, and Q. Dai, “Rank minimization for snapshot compressive imaging,” IEEE Trans. Pattern Anal., vol. 41, no. 12, pp. 2990–3006, 2019. https://doi.org/10.1109/TPAMI.2018.2873587.Search in Google Scholar PubMed

[128] L. Huang, R. Luo, X. Liu, and X. Hao, “Spectral imaging with deep learning,” Light: Sci. Appl., vol. 11, no. 1, p. 61, 2022. https://doi.org/10.1038/s41377-022-00743-6.Search in Google Scholar PubMed PubMed Central

[129] H. Yuan, X. Ding, Q. Yan, X. Wang, Y. Li, and T. Han, “Review of reconstruction methods for spectral snapshot compressive imaging,” in Communications, Signal Processing, and Systems, vol. 872, Q. Liang, W. Wang, X. Liu, Z. Na, and B. Zhang, Eds., Singapore, Springer Nature, 2023, pp. 313–322.10.1007/978-981-99-2653-4_39Search in Google Scholar

[130] X. Miao, X. Yuan, Y. Pu, and V. Athitsos, “λ-Net: reconstruct hyperspectral images from a snapshot measurement,” in 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019.10.1109/ICCV.2019.00416Search in Google Scholar

[131] Y. Cai, et al.., “Degradation-aware unfolding half-shuffle transformer for spectral compressive imaging,” in Advances in Neural Information Processing Systems 35 (NeurIPS 2022), New Orleans, LA, USA, 2022, pp. 37749–37761.Search in Google Scholar

[132] Y. Jia, et al.., “From RGB to spectrum for natural scenes via manifold-based mapping,” in 2017 IEEE International Conference on Computer Vision (ICCV), 2017.10.1109/ICCV.2017.504Search in Google Scholar

[133] B. J. Fubara, M. Sedky, and D. Dyke, “RGB to spectral reconstruction via learned basis functions and weights,” in 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2020.10.1109/CVPRW50498.2020.00248Search in Google Scholar

[134] Y. Cai, et al.., “MST++: multi-stage spectral-wise transformer for efficient spectral reconstruction,” in 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2022.10.1109/CVPRW56347.2022.00090Search in Google Scholar

[135] A. Y. Zhu, et al., “Ultra-compact visible chiral spectrometer with meta-lenses,” APL Photonics, vol. 2, no. 3, p. 036103, 2017. https://doi.org/10.1063/1.4974259.

[136] M. Khorasaninejad, W. T. Chen, J. Oh, and F. Capasso, “Super-dispersive off-axis meta-lenses for compact high resolution spectroscopy,” Nano Lett., vol. 16, no. 6, pp. 3732–3737, 2016. https://doi.org/10.1021/acs.nanolett.6b01097.

[137] A. Y. Zhu, et al., “Compact aberration-corrected spectrometers in the visible using dispersion-tailored metasurfaces,” Adv. Opt. Mater., vol. 7, no. 14, p. 1801144, 2019. https://doi.org/10.1002/adom.201801144.

[138] X. Hua, et al., “Ultra-compact snapshot spectral light-field imaging,” Nat. Commun., vol. 13, no. 1, p. 2732, 2022. https://doi.org/10.1038/s41467-022-30439-9.

[139] T. Chabot, J. Borne, G. Bédard, and S. Thibault, “Metasurface-based image slicers for integral field spectroscopy,” in Advances in Optical and Mechanical Technologies for Telescopes and Instrumentation V, 2022. https://doi.org/10.1117/12.2628825.

[140] A. McClung, S. Samudrala, M. Torfeh, M. Mansouree, and A. Arbabi, “Snapshot spectral imaging with parallel metasystems,” Sci. Adv., vol. 6, no. 38, p. eabc7646, 2020. https://doi.org/10.1126/sciadv.abc7646.

[141] C.-H. Lin, S.-H. Huang, T.-H. Lin, and P. C. Wu, “Metasurface-empowered snapshot hyperspectral imaging with convex/deep (CODE) small-data learning theory,” Nat. Commun., vol. 14, no. 1, p. 6979, 2023. https://doi.org/10.1038/s41467-023-42381-5.

[142] M. Yako, et al., “Video-rate hyperspectral camera based on a CMOS-compatible random array of Fabry–Pérot filters,” Nat. Photonics, vol. 17, no. 3, pp. 218–223, 2023. https://doi.org/10.1038/s41566-022-01141-5.

[143] J. Yang, K. Cui, Y. Huang, W. Zhang, X. Feng, and F. Liu, “Deep-learning based on-chip rapid spectral imaging with high spatial resolution,” Chip, vol. 2, no. 2, p. 100045, 2023. https://doi.org/10.1016/j.chip.2023.100045.

[144] A. Tittl, et al., “Imaging-based molecular barcoding with pixelated dielectric metasurfaces,” Science, vol. 360, no. 6393, pp. 1105–1109, 2018. https://doi.org/10.1126/science.aas9768.

[145] V. Vashistha, G. Vaidya, R. S. Hegde, A. E. Serebryannikov, N. Bonod, and M. Krawczyk, “All-dielectric metasurfaces based on cross-shaped resonators for color pixels with extended gamut,” ACS Photonics, vol. 4, no. 5, pp. 1076–1082, 2017. https://doi.org/10.1021/acsphotonics.6b00853.

[146] T. Wood, et al., “All-dielectric color filters using SiGe-based Mie resonator arrays,” ACS Photonics, vol. 4, no. 4, pp. 873–883, 2017. https://doi.org/10.1021/acsphotonics.6b00944.

[147] A. De Proft, et al., “Highly selective color filters based on hybrid plasmonic–dielectric nanostructures,” ACS Photonics, vol. 9, no. 4, pp. 1349–1357, 2022. https://doi.org/10.1021/acsphotonics.1c01983.

[148] A. M. Shaltout, J. Kim, A. Boltasseva, V. M. Shalaev, and A. V. Kildishev, “Ultrathin and multicolour optical cavities with embedded metasurfaces,” Nat. Commun., vol. 9, no. 1, p. 2673, 2018. https://doi.org/10.1038/s41467-018-05034-6.

[149] J. Lee, et al., “Compact meta-spectral image sensor for mobile applications,” Nanophotonics, vol. 11, no. 11, pp. 2563–2569, 2022. https://doi.org/10.1515/nanoph-2021-0706.

[150] J. Bao and M. G. Bawendi, “A colloidal quantum dot spectrometer,” Nature, vol. 523, no. 7558, pp. 67–70, 2015. https://doi.org/10.1038/nature14576.

[151] Z. Wang, et al., “Single-shot on-chip spectral sensors based on photonic crystal slabs,” Nat. Commun., vol. 10, no. 1, p. 1020, 2019. https://doi.org/10.1038/s41467-019-08994-5.

[152] C. Hu, S. Zheng, Q. Zhong, Y. Dong, T. Hu, and Z. Xu, “Design of a computational microspectrometer based on metasurfaces and multilayer thin films,” in 2022 Asia Communications and Photonics Conference (ACP), 2022. https://doi.org/10.1109/ACP55869.2022.10089055.

[153] J. Yang, et al., “Ultraspectral imaging based on metasurfaces with freeform shaped meta-atoms,” Laser Photonics Rev., vol. 16, no. 7, p. 2100663, 2022. https://doi.org/10.1002/lpor.202100663.

[154] X. Wu, D. Gao, Q. Chen, and J. Chen, “Multispectral imaging via nanostructured random broadband filtering,” Opt. Express, vol. 28, no. 4, p. 4859, 2020. https://doi.org/10.1364/OE.381609.

[155] Z. Wu, et al., “Random color filters based on an all-dielectric metasurface for compact hyperspectral imaging,” Opt. Lett., vol. 47, no. 17, pp. 4548–4551, 2022. https://doi.org/10.1364/OL.469097.

[156] J. Xiong, et al., “Dynamic brain spectrum acquired by a real-time ultraspectral imaging chip with reconfigurable metasurfaces,” Optica, vol. 9, no. 5, pp. 461–468, 2022. https://doi.org/10.1364/OPTICA.440013.

[157] S. Rao, Y. Huang, K. Cui, and Y. Li, “Anti-spoofing face recognition using a metasurface-based snapshot hyperspectral image sensor,” Optica, vol. 9, no. 11, pp. 1253–1259, 2022. https://doi.org/10.1364/OPTICA.469653.

[158] Q. Zhang, Z. Yu, X. Liu, C. Wang, and Z. Zheng, “End-to-end joint optimization of metasurface and image processing for compact snapshot hyperspectral imaging,” Opt. Commun., vol. 530, no. 1, p. 129154, 2023. https://doi.org/10.1016/j.optcom.2022.129154.

[159] P. Zhou, J. Zhou, Y. Wang, H. Xu, X. Qu, and Y. Li, “The computed tomographic imaging spectrometer based on metamaterial surface,” Opt. Mater., vol. 136, no. 1, p. 113378, 2023. https://doi.org/10.1016/j.optmat.2022.113378.

[160] K. Cui, “Seetrum Technology, the research achievement transformation enterprise of our lab., released new products,” in Nano-OptoElectronics Lab, Department of Electronic Engineering, Tsinghua University. Available at: http://nano-oelab.ee.tsinghua.edu.cn/Home/xxfb/xxfb_1.html?id=624&lang=en. Accessed: Jan. 16, 2024.

[161] L. Gao, R. T. Kester, and T. S. Tkaczyk, “Compact image slicing spectrometer (ISS) for hyperspectral fluorescence microscopy,” Opt. Express, vol. 17, no. 15, pp. 12293–12308, 2009. https://doi.org/10.1364/OE.17.012293.

[162] L. Gao, R. T. Kester, N. Hagen, and T. S. Tkaczyk, “Snapshot Image Mapping Spectrometer (IMS) with high sampling density for hyperspectral microscopy,” Opt. Express, vol. 18, no. 14, pp. 14330–14344, 2010. https://doi.org/10.1364/OE.18.014330.

[163] S. Zhao, Y. Ji, A. Feng, X. Zhang, and J. Han, “Analytical design of a cemented-curved-prism based integral field spectrometer (CIFS) with high numerical aperture and high resolution,” Opt. Express, vol. 30, no. 26, pp. 48075–48090, 2022. https://doi.org/10.1364/OE.477973.

[164] N. Bedard and T. Tkaczyk, “Snapshot spectrally encoded fluorescence imaging through a fiber bundle,” J. Biomed. Opt., vol. 17, no. 8, p. 080508, 2012. https://doi.org/10.1117/1.JBO.17.8.080508.

[165] G. Filacchione, et al., “The integral-field imager and spectrometer for planetary exploration (FISPEx),” in Proc. SPIE 12188, Advances in Optical and Mechanical Technologies for Telescopes and Instrumentation V, 1218809, Montréal, Québec, Canada, SPIE, 2022. https://doi.org/10.1117/12.2626982.

[166] A. A. Wagadarikar, N. P. Pitsianis, X. Sun, and D. J. Brady, “Video rate spectral imaging using a coded aperture snapshot spectral imager,” Opt. Express, vol. 17, no. 8, pp. 6368–6388, 2009. https://doi.org/10.1364/OE.17.006368.

[167] S. Yang, H. Qin, X. Yan, S. Yuan, and Q. Zeng, “Mid-wave infrared snapshot compressive spectral imager with deep infrared denoising prior,” Remote Sens., vol. 15, no. 1, p. 280, 2023. https://doi.org/10.3390/rs15010280.

[168] C. M. Wynn, et al., “Flight tests of the computational reconfigurable imaging spectrometer,” Remote Sens. Environ., vol. 239, no. 1, p. 111621, 2020. https://doi.org/10.1016/j.rse.2019.111621.

[169] X. Yu, Y. Su, X. Song, F. Wang, B. Gao, and Y. Yu, “Batch fabrication and compact integration of customized multispectral filter arrays towards snapshot imaging,” Opt. Express, vol. 29, no. 19, pp. 30655–30665, 2021. https://doi.org/10.1364/OE.439390.

[170] N. Gupta, P. Ashe, and S. Tan, “A miniature snapshot multispectral imager,” in Proc. SPIE 7660, Infrared Technology and Applications XXXVI, 76602G, Orlando, Florida, USA, SPIE, 2010. https://doi.org/10.1117/12.852661.

[171] S. X. Li, et al., “Self-powered and flexible photodetector with high polarization sensitivity based on MAPbBr3-MAPbI3 microwire lateral heterojunction,” Adv. Funct. Mater., vol. 32, no. 45, 2022, Art. no. 2206999. https://doi.org/10.1002/adfm.202206999.

[172] L. Y. Mei, et al., “Hybrid halide perovskite-based near-infrared photodetectors and imaging arrays,” Adv. Opt. Mater., vol. 10, no. 9, p. 2102656, 2022. https://doi.org/10.1002/adom.202102656.

[173] X. Zhu, et al., “Broadband perovskite quantum dot spectrometer beyond human visual resolution,” Light: Sci. Appl., vol. 9, no. 1, p. 73, 2020. https://doi.org/10.1038/s41377-020-0301-4.

[174] P. Martín-Mateos, F. U. Khan, and O. B. Manrique, “Direct hyperspectral dual-comb imaging,” Optica, vol. 7, no. 3, pp. 199–202, 2020. https://doi.org/10.1364/OPTICA.382887.

[175] Z. Y. Sun, Y. Li, B. F. Bai, Z. D. Zhu, and H. B. Sun, “Silicon nitride-based Kerr frequency combs and applications in metrology,” Adv. Photonics, vol. 4, no. 6, p. 064001, 2022. https://doi.org/10.1117/1.AP.4.6.064001.

[176] T. Voumard, T. Wildi, V. Brasch, R. G. Álvarez, G. V. Ogando, and T. Herr, “AI-enabled real-time dual-comb molecular fingerprint imaging,” Opt. Lett., vol. 45, no. 24, pp. 6583–6586, 2020. https://doi.org/10.1364/OL.410762.

[177] X. Zhang, et al., “Reconfigurable metasurface for image processing,” Nano Lett., vol. 21, no. 20, pp. 8715–8722, 2021. https://doi.org/10.1021/acs.nanolett.1c02838.

[178] X. M. Zhang, B. F. Bai, H. B. Sun, G. F. Jin, and J. Valentine, “Incoherent optoelectronic differentiation based on optimized multilayer films,” Laser Photonics Rev., vol. 16, no. 9, p. 2200038, 2022. https://doi.org/10.1002/lpor.202200038.

[179] G. Y. Cai, et al., “Compact angle-resolved metasurface spectrometer,” Nat. Mater., vol. 23, no. 1, pp. 71–78, 2023. https://doi.org/10.1038/s41563-023-01710-1.

[180] J. Park, X. Feng, R. Liang, and L. Gao, “Snapshot multidimensional photography through active optical mapping,” Nat. Commun., vol. 11, no. 1, p. 5602, 2020. https://doi.org/10.1038/s41467-020-19418-0.

[181] X. Feng, Y. Ma, and L. Gao, “Compact light field photography towards versatile three-dimensional vision,” Nat. Commun., vol. 13, no. 1, p. 3333, 2022. https://doi.org/10.1038/s41467-022-31087-9.

[182] Z. Shen, F. Zhao, C. Jin, S. Wang, L. Cao, and Y. Yang, “Monocular metasurface camera for passive single-shot 4D imaging,” Nat. Commun., vol. 14, no. 1, p. 1035, 2023. https://doi.org/10.1038/s41467-023-36812-6.

[183] C. Yang, et al., “Hyperspectrally compressed ultrafast photography,” Phys. Rev. Lett., vol. 124, no. 2, p. 023902, 2020. https://doi.org/10.1103/PhysRevLett.124.023902.

[184] W. Zhang, et al., “Handheld snapshot multi-spectral camera at tens-of-megapixel resolution,” Nat. Commun., vol. 14, no. 1, p. 5043, 2023. https://doi.org/10.1038/s41467-023-40739-3.

Received: 2023-11-30
Accepted: 2024-02-10
Published Online: 2024-03-22

© 2024 the author(s), published by De Gruyter, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
