Tuesday, October 28, 2014

Lab 6: Accuracy Assessment

Goal and Background
The main goal of lab 6 was to assess the accuracy of the unsupervised and supervised LULC maps created in labs 4 and 5. Accuracy assessment is a necessary step before releasing LULC maps to the public and to potential users of the maps. If the accuracy of a LULC map is too low it cannot be used, and the producer may have to redo the map(s). To assess the maps, ground reference testing was used, and this lab introduced how to collect and use these tests. Ground reference testing compares pixels in the LULC image against the corresponding pixels in a reference image.

Methodology

Part 1 of the lab was to assess the unsupervised LULC map from lab 4, eau_Chipp2000rcF.img. The reference image, eau_chip05.img, was added to a second viewer in ERDAS Imagine. This reference image is a high resolution image of the Chippewa and Eau Claire Counties study area collected in 2005.

To begin the accuracy assessment, the Accuracy Assessment window was opened from the Raster tab through the Supervised option. The image to be assessed, eau_Chipp2000rcF.img, was then selected from the lab 4 folder; once the proper image is loaded, the title of the Accuracy Assessment window changes from “No Title” to the image title. The reference image was chosen for comparison with the Select Viewer icon. Clicking this icon brings up a prompt telling the user to click in the reference image, and clicking anywhere in that image selects it for the assessment.

The next step was to change the reference points’ colors through View, then Change Colors. Points without a reference value display in white and points with a reference value in yellow. Random points were then added through Edit, then Create/Add Random Points. 125 points were requested, and making sure Stratified Random was selected is integral. Only the 5 classes from the LULC map were used, chosen under Select Classes in the Add Random Points window, with a minimum of 15 points per class. This scatters random points all over the reference image.

To make the points easier to compare, only 10 were assessed at a time: select the points, then View, then Show Current Selection. Each point was located on the reference image and classified as 1 of the 5 classes (water, forest, agriculture, urban/built-up, or bare soil), and this was repeated until all 125 points were referenced. The same referencing was also done for the supervised classification map; the process is exactly the same except for which classified image is used.

Once all the referencing was done, a classification accuracy report was generated. The report contains the user’s accuracy, the producer’s accuracy, the kappa statistic, and the overall classification accuracy, and it shows how many of the randomly selected pixels were classified correctly.

Figure 1: Ground reference points scattered across the reference image.
Figure 2: Each reference point assigned to a LULC class.

Figure 3: The original accuracy assessment report
Results

Below is the accuracy assessment report for the unsupervised classification method. One can see that its accuracy is too low for proper use.

Unsupervised error matrix (rows = classified data, columns = reference data)

Class            Water  Forest  Agriculture  Urban/built-up  Bare soil  Row Total
Water               15       0            0               0          0         15
Forest               1      33           11               1          0         46
Agriculture          0       2           24               1          1         28
Urban/built-up       1       1            9               6          0         17
Bare soil            0       2           15               0          2         19
Column Total        17      38           59               8          3        125

Overall classification accuracy = 64%
Overall Kappa Statistics = 0.52

Producer’s accuracy (omission error)          User’s accuracy (commission error)
Water = 88.24%                                Water = 100.00%
Forest = 86.84%                               Forest = 71.74%
Agriculture = 40.68%                          Agriculture = 85.71%
Urban/built-up = 75.00%                       Urban/built-up = 35.29%
Bare soil = 66.67%                            Bare soil = 10.53%
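Every statistic in this report can be reproduced from the error matrix itself. Below is a minimal Python sketch, separate from the lab workflow (which used ERDAS Imagine's generated report), that recomputes the figures from the unsupervised matrix above.

import numpy as np

# Error matrix with rows = classified data and columns = reference
# data; counts are the unsupervised assessment results shown above.
classes = ["Water", "Forest", "Agriculture", "Urban/built-up", "Bare soil"]
matrix = np.array([[15,  0,  0, 0, 0],
                   [ 1, 33, 11, 1, 0],
                   [ 0,  2, 24, 1, 1],
                   [ 1,  1,  9, 6, 0],
                   [ 0,  2, 15, 0, 2]])

n = matrix.sum()              # 125 reference points
diag = np.diag(matrix)        # correctly classified points
rows = matrix.sum(axis=1)     # classified (row) totals
cols = matrix.sum(axis=0)     # reference (column) totals

overall = diag.sum() / n      # overall classification accuracy
chance = (rows * cols).sum() / n**2
kappa = (overall - chance) / (1 - chance)   # agreement beyond chance

print(f"Overall = {overall:.0%}, kappa = {kappa:.2f}")   # 64%, 0.52
for i, name in enumerate(classes):
    print(f"{name}: producer's = {diag[i] / cols[i]:.2%}, "
          f"user's = {diag[i] / rows[i]:.2%}")

Substituting the supervised matrix shown below reproduces its 20% overall accuracy and -0.02 kappa the same way.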


Here is the supervised classification accuracy assessment. Like the unsupervised map, the supervised map’s accuracy is also too low for use.

Supervised error matrix (rows = classified data, columns = reference data)

Class            Water  Forest  Agriculture  Urban/built-up  Bare soil  Row Total
Water                2       0            0               0          0          2
Forest               0       6            0               0          0          6
Agriculture          0      48           11               0          2         61
Urban/built-up       1       0            2               2          1          6
Bare soil            0      13           30               2          4         50
Column Total         3      67           43               4          7        125

Overall classification accuracy = 20%
Overall Kappa Statistics = -0.02

Producer’s accuracy (omission error)          User’s accuracy (commission error)
Water = 66.67%                                Water = 100.00%
Forest = 8.96%                                Forest = 100.00%
Agriculture = 25.58%                          Agriculture = 18.03%
Urban/built-up = 50.00%                       Urban/built-up = 33.33%
Bare soil = 57.14%                            Bare soil = 8.00%

Tuesday, October 21, 2014

Lab 4: Unsupervised Classification

Goal and Background
The main goal of lab 4 was an introduction to classification methods. These methods are built around extracting information from a remotely sensed image and interpreting the subject well enough to classify it. The lab gave an understanding of how to recode pixels into land use/land cover classes. Land use/land cover (LULC) maps show pixels classified into similar classes relating to how the land is used and what type of land cover is there; in this lab the classes chosen were water, forest, agriculture, urban/built-up, and bare soil. Pixels with similar spectral signature ranges are grouped together to classify the image. The classification used in this lab was an unsupervised classification, in which an algorithm in ERDAS groups the spectral ranges together and the user must then identify what each group of pixels represents.

Methodology
The first part of this lab was to run a type of unsupervised classification algorithm called the Iterative Self-Organizing Data Analysis Technique, or ISODATA. To run this algorithm, an input raster needs to be loaded in ERDAS; the raster image used here was eau_chippewa2000.img.
Figure 1: The image classified in this lab (eau_chippewa2000.img)

To get to the classification tool, the steps to follow are Raster, then Unsupervised, then Unsupervised Classification. This opens the unsupervised panel shown below. Under Clustering Options the Isodata option was checked. Next, an output file was made, named eau_Chipp2000usp.img. The number of classes was set to 10, which means that pixels would be grouped into 10 classes depending on their pixel values. Initializing Options was opened just to make sure that Principal Axis was set (it is the default). After this, the Approximate True Color button was checked under Color Scheme Options, and the Maximum Iterations was set to 250.
Figure 2: The unsupervised panel which sets all the parameters.
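ERDAS runs ISODATA internally, but the core idea, iteratively migrating cluster means over the pixels' spectral values, can be sketched in a few lines of Python. The sketch below is a simplified stand-in, not ERDAS's implementation: real ISODATA also splits and merges clusters, the (rows, cols, bands) array layout is an assumption, and random initialization is used in place of the panel's principal-axis initialization.

import numpy as np

def isodata_like(image, n_classes=10, max_iter=250, threshold=0.95, seed=0):
    """Simplified ISODATA-style clustering of pixel spectra.
    image: (rows, cols, bands) array. Real ISODATA also splits and
    merges clusters; this keeps only the migrating-means core."""
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    rng = np.random.default_rng(seed)
    # ERDAS initializes means along the data's principal axis by
    # default; random pixels are used here for simplicity.
    means = pixels[rng.choice(len(pixels), n_classes, replace=False)]
    labels = np.full(len(pixels), -1)
    for _ in range(max_iter):
        # Assign every pixel to its nearest cluster mean.
        dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        # Stop once the fraction of pixels keeping their class reaches
        # the convergence threshold (the panel's Convergence Threshold).
        converged = (new_labels == labels).mean() >= threshold
        labels = new_labels
        if converged:
            break
        # Migrate each mean to the centroid of its assigned pixels.
        for k in range(n_classes):
            if (labels == k).any():
                means[k] = pixels[labels == k].mean(axis=0)
    return labels.reshape(image.shape[:-1]), means

With n_classes=10 and max_iter=250 this mirrors the parameters used in part 1; part 2 of the lab corresponds to n_classes=20 with threshold=0.92.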


After all these parameters were set, the model was run. The model created clusters of pixels that were classified together. To view the classes, select the Table tab, then Show Attributes; this shows the 10 classes created by the unsupervised classification. The next step was to recode the 10 classes into 5 and give them proper colors instead of the pink and cyan of the original unsupervised image. The 5 classes were water (blue), forest (dark green), agriculture (pink), urban/built-up (red), and bare soil (sienna).

To recognize what each class actually represents, it is advisable to connect to Google Earth from ERDAS, because the higher resolution imagery allows better classification; incorrectly classifying agriculture as urban/built-up would produce an odd-looking map. Another very important step is to change the Google Earth date to 2005 so that the Google Earth imagery corresponds to the image in ERDAS. Once everything was synced, reclassifying could begin. To identify the first class, its pixels were changed to gold; inspecting those pixels showed that this particular class was water. The pixel color was then changed to blue and the class name to water so the class is properly identified on a LULC map. This reclassifying process was repeated for each class in turn: the class was changed to gold, identified, and given the corresponding class color and name. To save the classification, simply closing the table will suffice, but it will ask to save, which should be done. The recoded image was saved as eau_Chipp2000rc.img through File, then Save As, then Top Layer As.
Figure 3: 10 class raster attribute table

The second part of the lab was to increase the number of pixel clusters from part 1. This time, instead of 10 classes, 20 was set as both the minimum and maximum number of classes, and the Convergence Threshold was set to 0.92. The output image was named eau_Chipp2000usp2.img. This image was recoded the same way as the first, but with 20 classes, and the reclassified image was saved as eau_Chipp2000rc2.img. To make the image easier to analyze, the raster attributes were changed through File, View, then View Raster Attributes. The column properties are accessed by clicking the column icon in Raster Attributes; there the column order was set to class names, color, and histogram, and under class_names the display width was set to 15 and the max width to 20.
Figure 4: 20 class raster attribute table

Figure 5: Column Properties window

The third part of the lab was to simplify the 20 classes into 5 classes. The image reclassified was eau_Chipp2000rc2.img. To recode it, Thematic and then Recode were clicked, which opened the Recode window. Each class was recoded into 1 of 5 values: water as 1, forest as 2, agriculture as 3, urban/built-up as 4, and bare soil as 5. The final image was saved as eau_Chipp2000rcF.img.
Figure 6: Recoded values (5 classes)
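The recode step amounts to a lookup table from the 20 cluster IDs to the 5 LULC values just listed. A minimal numpy sketch follows; the particular cluster-to-class assignments are hypothetical, since the real ones came from visually identifying each cluster.

import numpy as np

# Hypothetical mapping from the 20 ISODATA cluster IDs to the 5 LULC
# codes (1 water, 2 forest, 3 agriculture, 4 urban/built-up, 5 bare
# soil); the lab's actual assignments came from visual interpretation.
lookup = np.zeros(21, dtype=np.uint8)
lookup[[1, 2]] = 1                     # clusters identified as water
lookup[[3, 4, 5, 6]] = 2               # forest
lookup[[7, 8, 9, 10, 11, 12]] = 3      # agriculture
lookup[[13, 14, 15]] = 4               # urban/built-up
lookup[[16, 17, 18, 19, 20]] = 5       # bare soil

clusters = np.array([[1, 7, 13],       # toy stand-in for the 20-class
                     [20, 3, 9]])      # raster eau_Chipp2000rc2.img
recoded = lookup[clusters]             # 5-class LULC raster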


Results
Figure 7: The 10 class LULC image
Figure 8: The 20 class LULC image




Sources
Satellite imagery was collected by the Landsat program and used through ERDAS Imagine 2013. All processes were run in ERDAS Imagine.

Monday, October 13, 2014

Lab 3: Atmospheric Correction

Goal and Background
The purpose of this lab was an introduction to and first experience with different forms of correction of remotely sensed imagery. Remotely sensed images can be skewed by the atmosphere and hard to interpret visually, and that is where correction comes in: a corrected image gives a more comprehensible output and view. Multiple methods were used in this lab: empirical line calibration, dark object subtraction, and multidate image normalization. Empirical line calibration corrects remotely sensed images using spectral information from spectral libraries. Dark object subtraction uses a number of variables to correct an image: sensor gain, offset, solar irradiance, solar zenith angle, atmospheric scattering and absorption, and path radiance. The last method, multidate image normalization, is used when no in situ data can be found; it corrects an image using two images of the same area from different dates, finding the same objects in both images and using their spectral reflectance for the correction.

Methods
In the first part of this lab, the correction method used was empirical line calibration. The image corrected was titled eau_claire2011. The correction was done in ERDAS Imagine, specifically with the Spectral Analysis option: click the Raster tab, then Hyperspectral, then Spectral Analysis Workstation. This opens the Spectral Analysis Workstation, where the Eau Claire image was added. At first the image is in the wrong color band combination, so it was changed to false color infrared. The main step is to select the Edit Atmospheric Correction option, signified by its own icon, and choose Empirical Line as the method. A spectral plot then appears to show spectral samples. Numerous spectral signatures were taken with the Create a Point Selector tool, designated by a crosshair, which takes the spectral signature at a point and plots it. The goal is to collect very similar spectral signatures for comparison and correction. Once all signatures were collected, the image was ready to be corrected; to allow the calculations, the information was saved as an .aad file. A temporary product was generated through View, Preprocess, then Atmospheric Adjustment, and the final image was saved as a preprocessed image. Once everything was saved, checking the spectral signature in the Spectral Profile was the final step: the spectral signatures of the original and final images differ.
Figure 1: Spectral Analysis Workstation with Edit Atmospheric Correction Icon
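Conceptually, empirical line calibration fits a linear relationship between the image's raw values and known reflectance for the sampled targets, then applies that line to every pixel in the band. ERDAS does this internally once the signatures are collected; the Python sketch below, with made-up sample pairs, shows the underlying per-band calculation.

import numpy as np

# Paired samples for one band: raw image values at the selected points
# and matching reflectance from a spectral library. The numbers are
# illustrative, not the signatures collected in the lab.
dn_samples = np.array([12.0, 35.0, 80.0, 140.0, 210.0])
ref_samples = np.array([0.02, 0.08, 0.21, 0.38, 0.60])

# Least-squares fit of reflectance = gain * DN + offset.
gain, offset = np.polyfit(dn_samples, ref_samples, 1)

band = np.array([[15.0, 90.0], [200.0, 50.0]])  # toy image band
corrected = gain * band + offset                # estimated reflectance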


The second part of the lab was atmospheric correction using enhanced image-based dark object subtraction. Like the first correction method, this uses an equation to correct the image, but the variables of the equation need to be entered manually. The equation, entered into Model Maker in ERDAS Imagine, is

Lλ = ((LMAXλ − LMINλ) / (Qcalmax − Qcalmin)) * (Qcal − Qcalmin) + LMINλ

The variables, which can be found in the metadata file, are as follows.
Lλ = at-sensor spectral radiance [W/(m² sr µm)]
Qcal = Landsat image digital number (DN)
Qcalmin = minimum quantized calibrated pixel value corresponding to LMINλ
Qcalmax = maximum quantized calibrated pixel value corresponding to LMAXλ
LMINλ = spectral at-sensor radiance scaled to Qcalmin [W/(m² sr µm)]
LMAXλ = spectral at-sensor radiance scaled to Qcalmax [W/(m² sr µm)]
To get the newly corrected image, a model was built in Model Maker with three objects: a raster object, a function, and an output raster. 6 models were built, one for each band (1-5 and 7). Each original band was put into the raster object, the equation above with the proper variables was the function, and an at-satellite spectral radiance image was the output raster. The image created this way is still not void of all atmospheric interference; removing it requires a second equation:

Rλ = (π * D² * (Lλ − Lλhaze)) / (TAUv * Esunλ * cos θs * TAUz)

The variables are as follows.
Rλ = true surface reflectance
π = mathematical constant equal to 3.14159 [unitless]
D = distance between Earth and sun [astronomical units]
Lλ = at-sensor spectral radiance image
Lλhaze = path radiance
TAUv = atmospheric transmittance from ground to sensor
Esunλ = mean atmospheric spectral irradiance [W/(m² µm)]
θs = sun zenith angle (90° − sun elevation angle)
TAUz = atmospheric transmittance from sun to ground (TAUz differs by TM band, as seen below)
TM Band   TAUz
Band 1    0.70
Band 2    0.78
Band 3    0.85
Band 4    0.91
Again a model was built for each band, this time with the at-satellite spectral radiance images from the first step as the input rasters, the equation above with the proper variables as the function, and the final corrected images as the output rasters. Before the result could be viewed properly, the bands needed to be layer stacked; once all were stacked, the final image was created.
Figure 2: A model similar to the ones used in this lab
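Model Maker evaluates these two equations pixel by pixel, and the same arithmetic can be written out directly. Below is a sketch in Python for a single band, with illustrative calibration constants standing in for the values the lab pulled from the metadata file.

import numpy as np

# Step 1: digital numbers to at-sensor spectral radiance.
# Constants are illustrative; the lab's came from the metadata file.
lmin, lmax = -1.52, 193.0         # LMINλ, LMAXλ [W/(m² sr µm)]
qcal_min, qcal_max = 1.0, 255.0   # quantized calibrated pixel range
dn = np.array([[40.0, 120.0], [60.0, 230.0]])   # toy band (Qcal)
radiance = ((lmax - lmin) / (qcal_max - qcal_min)) * (dn - qcal_min) + lmin

# Step 2: at-sensor radiance to true surface reflectance.
d = 1.0098            # Earth-sun distance [astronomical units]
l_haze = 2.5          # path radiance estimated from a dark object
tau_v = 1.0           # ground-to-sensor transmittance (nadir view)
tau_z = 0.70          # sun-to-ground transmittance (TM band 1)
esun = 1957.0         # mean spectral irradiance [W/(m² µm)]
theta_s = np.radians(90.0 - 55.2)   # sun zenith = 90 - sun elevation

reflectance = (np.pi * d**2 * (radiance - l_haze)) / (
    tau_v * esun * np.cos(theta_s) * tau_z)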

The third and final part of this lab was multidate image normalization. Again, an equation was used to correct an image. Images of Chicago from 2000 and 2009 were used: the 2000 image serves as the reference for the correction and the 2009 image is the one that needs correcting. For this correction, sample points were taken at the same objects in both images: 6 from Lake Michigan, 5 from urban areas, and 4 from waterways, 15 points in all. The signatures were viewed in a Spectral Profile viewer, and the pixel values behind them were needed for the lab; they are viewed by going to View, then Tabular Data. The mean pixel value of each sample point in every layer was entered into Microsoft Excel. These means were needed for a regression analysis, built with a scatter plot for each band, which produces an R² value and a slope equation. The regression coefficients were then input into the atmospheric correction equation Lλsensor = Gainλ * DN + Biasλ. The variables are represented as
Lλsensor = at-satellite radiance image
Gainλ = a multiplicative component (the regression coefficient)
DN = subsequent image band (chicago2009.img image band(s))
Biasλ = the regression equation intercept

The slope equation y = mx + b is used, where m is the gain and b is the bias. As before, a model was needed for each band: the raster is the 2009 image, the function is the equation above, and the output raster is the new normalized image.
Figure 3: Scatter plots essential for completion of Multidate Image Normalization
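The gain and bias that Excel reports from the scatter-plot trendline are simply the coefficients of a least-squares line through the sample means, so the whole normalization step can also be scripted. Below is a per-band sketch in Python with placeholder means; the lab's actual values came from the Spectral Profile samples.

import numpy as np

# Mean pixel values of the sample points in one band; placeholders,
# not the means actually read from the Chicago images.
means_2009 = np.array([22.0, 25.0, 30.0, 41.0, 55.0, 68.0])  # x: DN
means_2000 = np.array([20.0, 24.0, 28.0, 40.0, 52.0, 66.0])  # y: target

# y = m*x + b: the slope m is Gainλ and the intercept b is Biasλ.
gain, bias = np.polyfit(means_2009, means_2000, 1)
r2 = np.corrcoef(means_2009, means_2000)[0, 1] ** 2   # fit quality

band_2009 = np.array([[30.0, 45.0], [60.0, 25.0]])  # toy 2009 band
normalized = gain * band_2009 + bias                # matched to 2000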


Results
Below are the image results from the lab.



Here are the results of the atmospheric correction. The first image is the multidate image normalization, the second is the dark object subtraction with an error in the equation which skewed the results (if this image comes out incorrectly, some editing is needed in the equation), and the third is the empirical line calibration.

Sources
All formulas and original images were provided by Dr. Cyril Wilson of the University of Wisconsin-Eau Claire. All models and image operations were done in ERDAS Imagine 2013.