Wednesday, November 19, 2008

Chandrayaan-1: imaging moon in 64 colours

If the Terrain Mapping Camera (TMC) can collect topographical data that will help compile a 3D lunar atlas with a 5-metre resolution, the Hyper-Spectral Imager (HySI) will enable the mineralogical mapping of the moon’s surface.
HySI, along with other instruments, will also help in understanding the composition of the moon’s interior.
Developed by the Ahmedabad-based Space Applications Centre (the same centre that developed the Terrain Mapping Camera), the HySI will operate in the visible and near-infrared bands.
As a result, the HySI will be able to collect crucial colour information about the moon’s surface features.
The colour information is collected over wavelengths from 421 nanometres to 964 nanometres, with a spectral resolution better than 15 nanometres.

Chandrayaan-1 goes around the moon in a north-south polar orbit. It will collect the sun’s light reflected from the moon’s surface on an area detector, capturing the data as frames, much the same way an ordinary camera captures an image.
One frame will correspond to 40 km in the north-south direction and 20 km in the east-west direction.
The 20 km coverage is called the swath. The rectangular frame has 512 pixels arranged in the north-south direction and 256 pixels in the east-west direction.
The detector can thus be thought of as 512 rows stacked in the north-south direction, with each row being a line of 256 pixels running east-west.
Each pixel covers 80 metres (hence 256 pixels x 80 metres gives the 20 km swath in the east-west direction). The area covered in the north-south direction depends on how long the HySI camera captures data: the longer the duration, the more area is covered.

The reflected light falling on HySI is split into spectral bands of different wavelengths by a wedge filter. The filter is placed in such a manner that the spectral separation happens in a north-south direction.
Hence the 512 rows, going from north to south, will sample continuously varying spectral wavelengths.
“One end of the array will have 421 nanometre and the other end will have 964 nanometre wavelength,” said Dr. Kiran Kumar A.S., Deputy Director, Sensor Development Area, Space Applications Centre, Ahmedabad.
The 256 pixels of any one row, running east-west, will collect information at the same spectral wavelength.
So in one instant the HySI camera picks up data in different wavelengths. Ideally, data collected by all the 512 rows will help in understanding the mineralogical composition better.
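To make these numbers concrete, here is a small sketch in Python (not flight software) of the frame geometry and the row-to-wavelength mapping. The assumption that the wedge filter spreads the wavelengths linearly across the 512 rows is made only for illustration.

N_ROWS = 512          # pixels along the north-south (spectral) direction
N_COLS = 256          # pixels along the east-west (spatial) direction
PIXEL_SIZE_M = 80     # ground footprint of one pixel, in metres
LAMBDA_MIN_NM = 421   # wavelength seen by the first row
LAMBDA_MAX_NM = 964   # wavelength seen by the last row

# Spatial extent of a single frame
swath_km = N_COLS * PIXEL_SIZE_M / 1000.0         # east-west: about 20 km
frame_length_km = N_ROWS * PIXEL_SIZE_M / 1000.0  # north-south: about 40 km

def row_wavelength_nm(row):
    """Approximate wavelength sampled by a given row (0 to 511),
    assuming a linear spread across the wedge filter."""
    step = (LAMBDA_MAX_NM - LAMBDA_MIN_NM) / (N_ROWS - 1)
    return LAMBDA_MIN_NM + row * step

print(swath_km, frame_length_km)                      # 20.48 km, 40.96 km
print(row_wavelength_nm(0), row_wavelength_nm(511))   # 421.0 nm, 964.0 nm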
But transmitting the voluminous data will be very challenging. “Onboard processing is done and only 64 spectral bands are transmitted,” said Dr. Kumar.
The data processing is done by combining the data from 8 consecutive rows, which cover the same region on the moon at slightly different wavelengths, into a single band.
This kind of data compression allows the 512 rows of spectral wavelengths to be sent as 64 spectral bands.
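The row-combining step can be pictured as a simple binning operation. The sketch below assumes the eight rows are averaged, which is an illustrative choice; the actual onboard processing is not described here in that detail.

import numpy as np

N_ROWS, N_COLS, GROUP = 512, 256, 8

# A simulated raw frame: one sample per (spectral row, spatial column)
raw_frame = np.random.rand(N_ROWS, N_COLS)

# Combine every 8 adjacent rows, which view the same ground strip at
# slightly different wavelengths, into one band: 512 rows become 64 bands.
bands = raw_frame.reshape(N_ROWS // GROUP, GROUP, N_COLS).mean(axis=1)

print(bands.shape)   # (64, 256): 64 spectral bands across the 20 km swath

Binning eight neighbouring rows still leaves each transmitted band roughly 8.5 nanometres wide (the 543-nanometre span spread over 64 bands), comfortably within the quoted spectral resolution of better than 15 nanometres.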
“The data compression will result in some data loss,” Dr. Kumar said, “but we need to compromise a little as we have to take into account data storage and transfer.”
Much like the Terrain Mapping Camera, the Hyper-Spectral Imager will be operational only for 20 minutes per orbit. This is because only the well-illuminated regions of the moon near the equator will be imaged at any given time.
“So the imaging period will be restricted to 60 days in six months. We will have two slots of 60 days each in a year,” he said.
The rate at which the moon will be imaged will be 1.4 km per second. Since the swath (east-west coverage) is fixed at 20 km, 100 seconds of continuous operation will cover an area of 140 km length and 20 km width.
In 20 minutes of operation per orbit, the area of the moon covered will be 1,680 km in length and 20 km in width. The higher latitudes, which will not be well lit by the sun, will be covered by increasing the exposure time of the camera.
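The coverage figures follow from straightforward arithmetic; the short sketch below simply makes the numbers quoted above explicit.

ground_speed_km_s = 1.4   # rate at which the ground track is imaged
swath_km = 20             # fixed east-west coverage

# 100 seconds of continuous imaging
print(100 * ground_speed_km_s, "km by", swath_km, "km")      # 140 km by 20 km

# 20 minutes (1,200 seconds) of imaging per orbit
print(20 * 60 * ground_speed_km_s, "km by", swath_km, "km")  # 1680 km by 20 km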
“We will be able to cover the entire moon in two years’ time,” Dr. Kumar said.
But why choose a wedge filter instead of a prism to split the incoming light into different spectral wavelengths? “We can get a compact system that weighs less only when a wedge filter is used. The complexity and weight increase when we use a prism,” he explained.