In many parts of the world pre-existing DEMs are not available, so all measurements must be made from the SAR satellite itself. In that case a three-pass method is used that separates the phase effects of topography from those of change/displacement by combining three SAR scenes. Ideally, two scenes should be close together in time and have a large baseline optimized for topography (which increases the overall accuracy). The third scene should bracket the desired change period and have a smaller baseline optimized for displacement monitoring (i.e. facilitating easy phase unwrapping). The three scenes are combined into two interferograms (one containing topographic information only, the other containing both topographic and change information). The two interferograms are then subtracted in a process called differential interferometric SAR (DifSAR), which yields the desired change/displacement interferogram.
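The subtraction step can be sketched in a few lines of numpy. This is a minimal illustration only: the baseline rescaling factor, the array shapes, and all variable names are assumptions, and real processing additionally requires co-registration, flat-earth removal and phase unwrapping.

```python
import numpy as np

def three_pass_difsar(phi_topo, phi_change, b_perp_topo, b_perp_change):
    """Three-pass differential interferometry sketch.

    phi_topo:   unwrapped phase of the topographic pair (radians)
    phi_change: unwrapped phase of the pair bracketing the change period
    b_perp_*:   perpendicular baselines of the two pairs (metres)

    The topographic phase scales with the perpendicular baseline, so the
    topographic pair is rescaled to the change pair's baseline before
    subtraction; the remainder is the displacement phase.
    """
    scale = b_perp_change / b_perp_topo
    return phi_change - scale * phi_topo

# toy example: flat 3x3 scene carrying 1 radian of displacement phase
topo = np.full((3, 3), 10.0)       # topographic phase, 200 m baseline
change = 0.5 * topo + 1.0          # 100 m baseline, plus displacement
disp = three_pass_difsar(topo, change, 200.0, 100.0)
```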
Space-borne InSAR is used for topography and displacement measurement and is a widely used technology for determining surface changes in the cm and mm range. The products of this method are DEMs and interferograms. Because the Earth's surface is affected by geophysical processes such as displacement phenomena, several investigations have analysed natural events such as volcanic unrest periods and landslides. SAR plays a major role in detecting such natural phenomena by providing high-resolution microwave images in any weather condition. Compared to infrared radiation, microwaves have the ability to penetrate clouds, fog, and possible ash or dust cover in the case of an erupting volcano. Nevertheless, some atmospheric disturbances can affect both the amplitude and, more importantly, the phase of SAR images. The focus in this context lies on the phase measurement to extract terrain information. A measurement of a single phase is usually not useful, as the absolute number of wave cycles is rarely known. A comparison between two or more phase measurements, however, yields a path-length difference that can be a fraction of a wavelength.
This principle of superposition of two or more waves is called interference and is based on the phase shift between two or more SAR images of the same scene taken at different times. One of the images serves as the reference image (master); the others are defined as additional images (slaves). Phase measurements were not usable on their own for a long time, since the phase of a single echo is a random variable. When two congruent SAR phases are compared, however, a connection can be made between two corresponding resolution cells. This requires two transmitter positions at slightly different distances from the target. The spatial baseline must remain below the system's critical value (the maximum usable distance between the antenna positions), the temporal baseline should not be stretched too far, and the wavelength must be identical for the two acquisitions. The result is a path-length difference which corresponds to the interferometric phase (0 to 2π, or −π to +π). The phase difference of two acquisitions is related to the two-way path-length difference ΔR as follows:

Δφ = (4π / λ) · ΔR
Here the phase difference Δφ is measured in radians and λ is the wavelength. An interferogram is created by a pixel-wise cross-multiplication. This technique is only applicable when a coherent signal is detected, but in practice some of the phase measurements are incoherent and provide no meaningful information. The interferogram is also often influenced by noise (e.g. radar shadow, vegetation/leaf movement). Therefore, only meaningful phase information can be processed. This leads to the topic of coherence, which is the complex correlation between the phase information of the two complex SAR images (Kumamoto 2016). The coherence indicates how well the two single phase values correlate with each other, and hence how consistent the phase difference in the interferogram is. The coherence is calculated as

γ = Σ (p1 · p2*) / √( Σ |p1|² · Σ |p2|² )
Here p1 and p2 are the complex pixel values (p2* denotes the complex conjugate) and N is the number of pixels in the N-sample window used to estimate the coherence. The magnitude of the complex correlation coefficient γ is called the interferometric coherence and can be used to detect changes in the observed target between two acquisitions (Koppel et al., 2015). The magnitude |γ| ranges from 0 to 1. Note that γ is a complex number: its argument corresponds to the phase difference and its magnitude to the reliability of that phase. A coherence value of |γ| = 1 means complete correlation (fully coherent), whereas |γ| = 0 means complete decorrelation. Natural targets tend to lose coherence faster than man-made targets, giving rise to the use of the coherence parameter for detecting built-up, anthropogenic areas (Zebker et al., 1997; Ferretti et al., 2007; Koppel et al., 2015; Spaans and Hooper, 2016; Zebker and Goldstein).
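The coherence estimate can be written directly from this definition. The following is a simplified sketch over a single estimation window (numpy assumed; a real processor would first co-register the images and remove the deterministic flat-earth and topographic phase):

```python
import numpy as np

def coherence(s1, s2):
    """Sample coherence |gamma| of two co-registered complex SAR patches,
    estimated over all N pixels of the patch.

    Returns a value in [0, 1]: 1 = fully coherent, 0 = decorrelated.
    """
    num = np.abs(np.sum(s1 * np.conj(s2)))
    den = np.sqrt(np.sum(np.abs(s1) ** 2) * np.sum(np.abs(s2) ** 2))
    return num / den

# identical patches give full coherence
patch = np.exp(1j * np.linspace(0.0, 1.0, 25)).reshape(5, 5)
print(round(coherence(patch, patch), 3))  # prints 1.0
```

In practice this estimator is evaluated in a small sliding window (e.g. 5 x 5 pixels) over the whole interferogram, producing a coherence map alongside the phase image.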
2.8 Characteristics of SAR satellites
Spatial resolution: the size of one observation unit on the Earth's surface, i.e. how much detail is visible in the image.

Spectral information: the number of bands in which a sensor can acquire information from the electromagnetic spectrum.

Repeat interval: the time taken by a satellite to complete one entire orbital cycle, i.e. the interval after which it can image the same area of the Earth's surface again.

L band: operates at a wavelength of 15-30 cm and a frequency of 1-2 GHz (Rosenqvist, 1999; Fu, Ma, & Wu, 2010). Because of its longer wavelength, the L band has greater penetration capacity and is mostly scattered by the branches and trunks of trees.

C band: operates at a wavelength of 4-8 cm and a frequency of 4-8 GHz. It mainly penetrates leaves and small branches, so it is less used in biomass estimation.

X band: operates at a wavelength of 2.5-4 cm and a frequency of 8-12 GHz. Because of its shorter wavelength, the X band is scattered by leaves, so it is useful for extracting surface information about trees.

HH & VV polarization: HH denotes like polarization in which both the incident and the reflected signal are horizontal; VV means both are vertical. The VV backscattering coefficient shows better correlation with biomass than HH, and the combination of HH & VV gives good results (Matsuoka & Yamazaki, 2000).

The following agencies operate SAR missions:
i. European Space Agency (ESA): ERS-1, ERS-2, Envisat, Sentinel-1
ii. Japan Aerospace Exploration Agency (JAXA): JERS-1, ALOS-1, ALOS-2
iii. Canadian Space Agency (CSA): Radarsat-1, Radarsat-2, Radarsat constellation
iv. Deutsches Zentrum für Luft- und Raumfahrt e.V. (DLR): TerraSAR-X, TanDEM-X
v. Indian Space Research Organization (ISRO): RISAT-1, NISAR (w/ NASA)
vi. Comision Nacional de Actividades Espaciales: SAOCOM
vii. Italian Space Agency (ASI): COSMO-Skymed
viii. Instituto Nacional de Técnica Aeroespacial (INTA): PAZ
ix. Korea Aerospace Research Institute (KARI): KOMPSAT-5
x. National Aeronautics and Space Administration (NASA): NISAR (w/ ISRO)
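The band figures quoted above follow from the relation λ = c / f. A quick sanity check in Python (the centre frequencies chosen below, e.g. 5.405 GHz for C band as used by Sentinel-1, are illustrative examples, not mission requirements):

```python
# Relation between radar wavelength and frequency: lambda = c / f.
C = 299_792_458.0  # speed of light, m/s

def wavelength_cm(freq_ghz):
    """Convert a radar centre frequency in GHz to wavelength in cm."""
    return C / (freq_ghz * 1e9) * 100.0

# nominal centre frequencies for one satellite per band (illustrative)
for band, f in [("L", 1.27), ("C", 5.405), ("X", 9.65)]:
    print(f"{band}-band {f} GHz -> {wavelength_cm(f):.1f} cm")
```

The results (about 23.6 cm, 5.5 cm and 3.1 cm) fall inside the 15-30 cm, 4-8 cm and 2.5-4 cm ranges stated above.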
Figure 2.4: SAR satellite missions by band and launch period. Source: https://www.unavco.org/instrumentation/geophysical/imaging/sar-satellites/sar-satellites.html
The SENTINEL-1 mission is the European Radar Observatory for the Copernicus joint initiative of the European Commission (EC) and the European Space Agency (ESA). Copernicus, previously known as GMES, is a European initiative for the implementation of information services dealing with environment and security. It is based on observation data received from Earth Observation satellites and ground-based information (Olesk, Voormansik, Pohjala, & Noorma, 2015).
Figure 2.5: The Sentinel-1 SAR satellite. Source: https://sentinels.copernicus.eu/web/sentinel/missions/sentinel-1/overview
The SENTINEL-1 mission includes C-band imaging operating in four exclusive imaging modes with different resolution (down to 5 m) and coverage (up to 400 km). It provides dual polarisation capability, very short revisit times and rapid product delivery. For each observation, precise measurements of spacecraft position and attitude are available.
Synthetic Aperture Radar (SAR) has the advantage of operating at wavelengths not impeded by cloud cover or a lack of illumination and can acquire data over a site during day or night time under all weather conditions. SENTINEL-1, with its C-SAR instrument, can offer reliable, repeated wide area monitoring (Moccia, 2010).
Mission data continuity with ERS and ENVISAT is provided by a constellation of two satellites, SENTINEL-1A and SENTINEL-1B, sharing the same orbital plane. Sentinel-1A was launched on 3 April 2014, and the Sentinel-1B launch was scheduled for April 2016.
SENTINEL-1 is designed to work in a pre-programmed, conflict-free operation mode, imaging all global landmasses, coastal zones and shipping routes at high resolution and covering the global ocean with vignettes. This ensures the reliability of service required by operational services and a consistent long-term data archive built for applications based on long time series (Potin, Rosich, Miranda, & Grimont, 2016).
Sentinel-1 is a radar imaging mission for ocean, land and emergency services. It is mainly used for monitoring sea-ice zones and the Arctic environment; for surveillance of the marine environment and maritime security, including ship detection, oil-spill monitoring, and wind, wave and current monitoring; and for monitoring land-surface motion such as subsidence, landslides, tectonics and volcanoes.
2.10 Characteristics of Sentinel-1
a) Constellation of two satellites (A & B units)
b) C-Band Synthetic Aperture Radar Payload (at 5.405 GHz)
c) 7 years design life time with consumables for 12 years
d) Near-Polar sun-synchronous (dawn-dusk) orbit at 698 km
e) 12 days repeat cycle (1 satellite), 6 days for the constellation
f) Both Sentinel-1 satellites in the same orbital plane (180 deg phased in orbit)
g) On-board data storage capacity (mass memory) of 1400 Gbit
h) Two X-band RF channels for data downlink with 2 * 260 Mbps
i) On-board data compression using Flexible Dynamic Block Adaptive Quantization (FDBAQ)
j) Optical Communication Payload (OCP) for data transfer via laser link with the GEO European Data Relay Satellite (EDRS)
k) Instrument operations constraints
l) SAR modes exclusivity (incl. polarisation schemes)
m) SAR mode transition time (2.4 sec.)
n) SAR duty cycle (25 min/orbit for the 3 high rate modes)
o) Huge volume of data, potentially up to 2.4 TB/day with the two satellites
2.11 Data Distribution:
Sentinel data can be accessed by anyone: no distinction is made between public, commercial and scientific uses, and the data are licensed free of charge. Open and free access maximizes the beneficial use of SENTINEL data across the widest range of applications and is intended to stimulate the uptake of Earth Observation based information by end users.
All SENTINEL-1 SAR data acquired are systematically processed to create predefined product types and are available globally, regionally and locally, within a defined timescale.
Global products are systematically generated for all acquired data. They include Level-0, detected Level-1 and Level-2 ocean products. These products are made available within 1 hour of observation over NRT areas with a subscription and, in all cases, within 24 hours of observation.
Operational Product Availability for Each Level
Table 2.1: Sentinel-1 product types

Raw (Level-0): Compressed, unprocessed instrument source packets, with additional annotations and auxiliary information to support the processing.

Slant-Range Single-Look Complex (SLC, Level-1): Focused data in slant-range geometry, single look, containing phase and amplitude.

Ground Range Detected Geo-Referenced (GRD, Level-1): Focused data projected to ground range, detected and multi-looked. Data are projected to ground range using an Earth ellipsoid model, maintaining the original satellite path direction and including complete geo-reference information.

Ocean (OCN, Level-2): Ocean wind field, swell wave spectra and surface radial velocity information as derived from the SAR data.
Sentinel-1 SAR can be operated in 4 exclusive imaging modes with different resolution and coverage:
Table 2.2: Main specifications of the imaging modes (resolutions are ground-range single-look values)

Strip Map (SM): resolution 5 m (range) x 5 m (azimuth); polarisation HH or VV or HH+HV or VV+VH.

Interferometric Wide Swath (IW): resolution 5 m (range) x 20 m (azimuth); polarisation HH or VV or HH+HV or VV+VH.

Extra Wide Swath (EW): resolution 20 m (range) x 40 m (azimuth); polarisation HH or VV or HH+HV or VV+VH.

Wave Mode (WM): incidence angles 23° and 36.5°; >20 x 20 km vignettes at 100 km intervals; resolution 5 m (range, TBC) x 5 m (azimuth, TBC); polarisation HH or VV.

For all modes:
radiometric accuracy (3σ): 1 dB
Noise Equivalent Sigma Zero: -22 dB
Point Target Ambiguity Ratio: -25 dB
Distributed Target Ambiguity Ratio: -22 dB
2.12 Global Navigation Satellite System (GNSS)
GNSS stands for Global Navigation Satellite System, the generic term for a satellite system with global coverage; it is often used when talking about satellite navigation without specifying a particular system. GPS was the first GNSS and has its origin in radio navigation systems. The main application of GPS is to provide location and time information anywhere on Earth where there is a free line of sight to four or more GPS satellites, under all weather conditions. Although GPS observations are not strictly a remote sensing application, a few examples of their use for the study of surface deformations are given here, since the GPS method can record long-period horizontal movements.
2.13 Basic concept of GPS
Basically, GPS consists of three segments: the space segment, the control segment, and the user segment.
1. The space segment consists of satellites which broadcast radio signals to users and receive commands from the control segment.
2. The control segment monitors the space segment and sends commands and information to the satellites.
3. The user segment consists of receivers that record and interpret the radio signals broadcast by the satellites.
Figure 2.6: Components of GNSS satellite systems. Source: https://www.novatel.com/an-introduction-to-gnss/chapter-1-gnss-overview/section-1/
GPS receivers record the Earth's movements during an earthquake: a receiver calculates its position by precisely timing the signals sent by GPS satellites high above the Earth. Each satellite continually transmits messages that include:
1. the time the message was transmitted, and
2. the satellite position at the time of message transmission.
The receiver uses the messages it receives to determine the transit time of each message and computes the distance to each satellite using the speed of light. Each of these distances, together with the corresponding satellite's location, defines a sphere; when the distances and satellite locations are correct, the receiver lies on the surface of each sphere. These distances and satellite locations are used to compute the location of the receiver via the navigation equations. The location can then be displayed, perhaps on a moving map or as latitude and longitude; elevation information may be included, and many GPS units also show derived information such as direction and speed, calculated from position changes. In practice, four or more satellites are used to obtain a precise and accurate result: solving the navigation equations for the intersection of the spheres yields the position of the receiver along with the accurate time, which removes the need for a large, expensive receiver clock. This accurate timing is what makes GPS useful for managing disasters such as earthquakes.
2.14 Measure earthquakes by GPS
Earthquakes can be measured in a variety of ways. Traditionally, earthquake size has been determined by seismological methods, which examine the amount of shaking, which is directly related to the energy released in an earthquake.
GPS measures the size of an earthquake by examining the final amount that a station has been displaced in an event. This is done by examining the total distance that a station has moved in an earthquake by comparing its position prior to the event with its position following the event.
Scientists have found that there is a relationship between the amount of displacement caused by an earthquake and its magnitude. It is by using this relationship between slip and magnitude that scientists can measure the relative size of an earthquake using GPS.
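The slip-magnitude relationship mentioned above is usually expressed through the seismic moment, M0 = μ·A·D, and the moment magnitude Mw = (2/3)·(log10 M0 − 9.1). The sketch below is illustrative only: the rigidity value and the fault dimensions are typical assumed numbers, not values from this text.

```python
import math

def moment_magnitude(slip_m, area_m2, rigidity=3.0e10):
    """Moment magnitude from fault slip.

    Seismic moment M0 = mu * A * D (N*m), with rigidity mu ~3e10 Pa
    (a typical crustal value, assumed here), fault area A and slip D.
    Moment magnitude: Mw = (2/3) * (log10(M0) - 9.1).
    """
    m0 = rigidity * area_m2 * slip_m
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

# 2 m of average slip on a 50 km x 20 km fault patch
print(round(moment_magnitude(2.0, 50e3 * 20e3), 1))  # prints 7.1
```

This is why the final static displacement measured at GPS stations, once attributed to slip on a fault, constrains the magnitude of the event.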
GPS is not used to measure the actual shaking of the ground because of the way in which the actual data are collected. Data are sampled at a certain rate, called a sample rate, which means that the receiver records the information being sent to it from the satellites at a certain interval of time all day long.
For example, data can be sampled at a 30-second interval, which means that the receiver records information from the satellite every 30 seconds. That means that if the shaking from the earthquake lasts any less than 30 seconds, it will be missed by the receiver.
Because of this, data are processed, and a daily solution is determined, which means that the change in position of the receiver is calculated for one day at a time by combining the data collected throughout the day. The data can also be processed at another solution interval. For example, data could be sampled at a 1-second rate and processed, but the solutions would be far less accurate than the daily solutions.
This is the reason why GPS is not used to directly measure the ground shaking during an earthquake. Seismometers are much better equipped to accurately record that sort of high-frequency motion than GPS. So, earthquake size is determined instead by measuring the final displacement of the stations and using the slip versus magnitude relationship.
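The precision gain from combining a full day of epochs can be illustrated with a toy simulation. The noise level and the white-noise assumption below are illustrative only, not a real GPS error model:

```python
import numpy as np

# A receiver logging a fixed point at a 30-second sample rate gives
# 2880 epochs per day. If each epoch position carries ~1 cm of
# independent noise, averaging a full day shrinks the error by roughly
# sqrt(2880) (idealised white-noise assumption).
rng = np.random.default_rng(42)
true_east = 0.0                                    # metres
samples = true_east + rng.normal(0.0, 0.01, 2880)  # one day of epochs
daily = samples.mean()                             # the "daily solution"
print(f"epoch scatter ~{samples.std():.4f} m, "
      f"daily solution error {abs(daily - true_east):.5f} m")
```

Under these assumptions the daily solution is good to a fraction of a millimetre while any single 30-second epoch scatters at the centimetre level, which is why slow deformation is resolved from daily solutions while shaking shorter than the sample interval is missed entirely.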
2.15 Navigation equations
The receiver uses the messages received from the satellites to determine the satellite positions and the time of transmission. The x, y and z components of satellite position and the transmission time are designated xi, yi, zi, ti, where the subscript i denotes the satellite (i = 1, 2, …, n, with n ≥ 4). When the time of message reception indicated by the on-board clock is tr, the true reception time is tr + b, where b is the receiver's clock bias (i.e. clock delay). The message's transit time is therefore tr + b − ti. Assuming the message travelled at the speed of light c, the distance travelled is (tr + b − ti)·c. Knowing the distance from receiver to satellite and the satellite's position implies that the receiver lies on the surface of a sphere centred at the satellite's position with radius equal to this distance. Thus, if the receiver receives signals from more than one satellite, it is at or near the intersection of the sphere surfaces; in the ideal case of no errors, it is exactly at the intersection.
The clock error or bias, b, is the amount by which the receiver's clock is off. The receiver therefore has four unknowns: the three components of its position and the clock bias, i.e. x, y, z, b. The equations of the sphere surfaces are:

(x − xi)² + (y − yi)² + (z − zi)² = ((tr + b − ti)·c)²,  i = 1, 2, …, n,

or, in terms of the pseudoranges

pi = (tr − ti)·c,

pi = √((x − xi)² + (y − yi)² + (z − zi)²) − b·c,  i = 1, 2, …, n.

These equations can be solved by algebraic or numerical methods.
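A standard numerical approach is iterative (Gauss-Newton) least squares. The following self-contained sketch solves the navigation equations for a synthetic scenario; the satellite geometry and the 0.1 ms clock bias are illustrative assumptions, not real ephemeris data.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position(sat_pos, pseudoranges, iters=10):
    """Solve p_i = ||x - x_i|| - b*c for the receiver position (x, y, z)
    and clock bias b by Gauss-Newton least squares, starting from the
    Earth's centre. sat_pos: (n, 3) array with n >= 4 satellites."""
    x = np.zeros(3)
    beta = 0.0                                  # clock bias in metres (b*c)
    for _ in range(iters):
        diff = x - sat_pos                      # (n, 3)
        rho = np.linalg.norm(diff, axis=1)      # geometric ranges ||x - x_i||
        residual = pseudoranges - (rho - beta)  # observed minus modelled
        # Jacobian: d(rho)/dx is the unit line-of-sight vector,
        # and d(-beta)/d(beta) = -1
        J = np.hstack([diff / rho[:, None], -np.ones((len(rho), 1))])
        delta, *_ = np.linalg.lstsq(J, residual, rcond=None)
        x = x + delta[:3]
        beta = beta + delta[3]
    return x, beta / C                          # position (m), bias (s)

# Synthetic check: receiver on the Earth's surface, five satellites at
# GPS-like distances, clock bias of 0.1 ms (all numbers illustrative).
truth = np.array([6371e3, 0.0, 0.0])
sats = np.array([[26600e3, 0.0, 0.0],
                 [0.0, 26600e3, 0.0],
                 [0.0, 0.0, 26600e3],
                 [18000e3, 18000e3, 0.0],
                 [18000e3, 0.0, 18000e3]])
b_true = 1e-4
pr = np.linalg.norm(truth - sats, axis=1) - b_true * C
pos, bias = solve_position(sats, pr)
```

With noise-free pseudoranges the solver recovers the position and the clock bias essentially exactly, which mirrors the statement above that the receiver obtains accurate time as a by-product of positioning.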
2.16 Benefits of an earthquake monitoring system
An earthquake early-warning system may provide the critical information needed
(1) to minimize loss of property and lives,
(2) to aid rescue operations, and
(3) to assist recovery from earthquake damage.
The most effective use of earthquake early warning is to activate automated systems to prepare for incoming strong ground shaking: slowing down rapid-transit vehicles and high-speed trains to avoid potential derailment, orderly shutdown of pipelines and gas lines to minimize fire hazards, controlled shutdown of manufacturing operations to decrease potential damage to equipment, and safeguarding computer information by saving vital data and retracting disk heads away from the disk surface. All of the above can be accomplished to a useful extent with a few seconds' notification.

Although human response may take more than a few seconds, personal safety could be greatly enhanced if people were alerted: school children could seek cover under desks and workers could move away from hazardous positions. More importantly, early earthquake notification might reduce panic and confusion. The functions of a modern society, including civil and military operations, will be less likely to descend into chaos if an early earthquake notification is available and drills for appropriate actions have been performed. For example, the Mexico City Alert System (and associated programs to educate the public) demonstrated its usefulness during the September 14, 1995 earthquake (Espinosa-Aranda et al., 1995; 1996).

For an earthquake early-warning system with a sufficient number of well-distributed accelerometers, such as the real-time system in operation in Taiwan (Wu et al., 1997), the maximum expected ground motion caused by an earthquake (i.e., a shake map) can be estimated quickly, so that emergency response teams may be dispatched where they are needed most. In practice, such a shake map will be revised and updated as more information is received. In addition, an inventory of man-made structures and their vulnerability must exist so that losses from an earthquake can be quickly assessed to aid disaster response and recovery.
The usefulness of this approach has been recognized, especially after the Northridge earthquake (Goltz, 1996; Eguchi et al., 1997). Recently, the Federal Emergency Management Agency (FEMA) of the United States introduced a risk assessment methodology, Hazards United States (Hazus), to assist emergency managers in estimating earthquake risk in their jurisdictions (Nishenko, 1998). The shake map produced by an earthquake early warning or rapid notification system is required for Hazus' approach to estimating loss after an earthquake.
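The "few seconds' notification" mentioned above comes from the speed difference between the fast, weak P waves used for detection and the slower, damaging S waves. A rough back-of-the-envelope estimate (the wave speeds and detection latency below are typical assumed values, not figures from any specific warning system):

```python
# Warning time available at an epicentral distance d: the alert can be
# issued once the P wave (~6 km/s, assumed) is detected and processed,
# while the damaging S wave (~3.5 km/s, assumed) is still travelling.
def warning_time_s(distance_km, vp=6.0, vs=3.5, latency=2.0):
    """Seconds between alert issuance and S-wave arrival (crude sketch);
    latency models detection and alert-dissemination delay."""
    return distance_km / vs - distance_km / vp - latency

print(round(warning_time_s(100.0), 1))  # prints 9.9
```

At 100 km from the epicentre this gives on the order of ten seconds of warning, consistent with the automated shutdown actions listed earlier, while sites very close to the epicentre receive little or no warning at all.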