Friday, November 28, 2014

Lab 10: Object-Based Classification

Goal and Background
                The goal of this lab exercise was to gain an introduction to object-based classification. Object-based classification is a highly accurate classification method that uses both spectral and spatial information to create land use and land cover classifications. It first segments the image into objects; these objects are then classified based on their spatial and spectral features. eCognition Developer64 by Trimble is the software used for this lab. This software has object-based tools that ERDAS Imagine is lacking.
Methods
                The first part of the lab was to create a new project, which is done by selecting the New Project icon. Importing image layers brings in the image used for this lab, which covers Chippewa and Eau Claire counties. The Create Project dialog holds all of the project information. Here the Use geocoding box is checked, which allows the image layer information to be displayed, and using a single value for all layers is chosen. To set the image as false color, open Edit Image Layer Mixing and set layers 4, 3, and 2 as RGB in that order. A process tree is then created, which allows creation of a parent process, named Lab10_Segment. A child process is inserted, and its segmentation parameters are edited: Level 1 is the level name, the scale parameter is kept at 10, and 0.2 and 0.4 are used for shape and compactness. This creates image objects like those shown below.
Objects created from Object-Based Classification
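                eCognition's multiresolution segmentation is proprietary, but the general idea of grouping pixels into spectrally similar objects, with parameters that trade detail against compact shapes, can be sketched with scikit-image's SLIC algorithm. This is a different segmentation algorithm used purely for illustration, and the image array is a placeholder:

import numpy as np
from skimage.segmentation import slic

# Placeholder 3-band image standing in for the false color composite.
img = np.random.rand(200, 200, 3)

# n_segments plays a role loosely analogous to the scale parameter, and
# compactness trades spectral homogeneity against compact object shapes.
objects = slic(img, n_segments=500, compactness=10.0)

# Per-object mean band values, the kind of feature the nearest neighbor
# classifier uses later in the lab.
labels = np.unique(objects)
means = np.array([img[objects == lab].mean(axis=0) for lab in labels])
print(means.shape)  # (number of objects, 3 bands)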



                Each class name and class color needs to be set. This is done under the Classification tab, then Class Hierarchy. Right-clicking anywhere in the Class Hierarchy shows the Insert Class option. The classes created were forest as dark green, agriculture as pink, urban/built-up as red, water as blue, and green vegetation/shrub as light green. Next, the type of classifier is chosen; for this lab it is Nearest Neighbor. The pop-up screen that appears after selecting NN as the classifier determines which layer values are used. Mean is selected, so the mean of each layer will contribute to classification. Next is to declare samples, found under Classification as Select Samples. In the Samples menu, clicking a class selects it, and double-clicking an object selects that object as a sample for the class. In the Process Tree, on the Lab10_Segment process, select Append New and create a classification process. In the Insert Child dialog, under Algorithm Parameters, selecting all the classes ensures they will be used in classification. Executing the classification process in the Edit Process dialog classifies the image. Manual editing can be done if the image is not up to standards; this is found under the Manual Editing toolbar, where drawing a polygon around misclassified pixels and then selecting the correct class reclassifies them.
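                Conceptually, the nearest neighbor classifier assigns each unclassified object the class of the most similar sample object, compared on the mean layer values. Below is a minimal scikit-learn sketch of that idea; the feature values and labels are invented, and eCognition's internal implementation differs:

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Each row is one image object's mean value in layers 1-4 (assumed features).
sample_means = np.array([
    [20, 35, 30, 90],   # forest sample
    [60, 70, 75, 50],   # urban/built-up sample
    [15, 20, 18, 10],   # water sample
])
sample_labels = ["forest", "urban/built-up", "water"]

# k=1 gives classic nearest neighbor behavior.
nn = KNeighborsClassifier(n_neighbors=1)
nn.fit(sample_means, sample_labels)

unclassified = np.array([[18, 33, 28, 85], [14, 22, 19, 12]])
print(nn.predict(unclassified))  # -> ['forest' 'water']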


Results 
The Object-Based Classification appears to be more accurate than some of the other advanced classification methods.

Object-Based Classification

Lab 9: Advanced Classifiers 2

Goal and Background
                This lab was a continuation of how to use advanced classifiers. The advanced classifiers for this lab were a decision tree and an artificial neural network. The lab served as an introduction to these types of powerful advanced classifiers. A decision tree helps define land classifications with hypotheses, rules, and variables. An artificial neural network tries to mimic human brain activity when deciding land classifications, running numerous iterations before the end product can be classified.

Methods
                The first part of the lab was to use an expert system classification to improve an existing image. Expert systems use ancillary data to help improve images and can achieve very high accuracy assessments. The first stage in expert system classification is to create a knowledge base, which contains the rules that drive classification. The image used in this part of the lab was eau_cpw_al2011cl.img, an image of the Eau Claire-Chippewa Falls-Altoona area that contains some wrongly classified pixels. In ERDAS Imagine, the Knowledge Engineer is used; it is found under the Raster tab. Knowledge Engineer has three main components: hypotheses, rules, and variables. Hypotheses are the planned LULC classes, rules are conditional statements that tie hypotheses to prior imagery and ancillary data, and variables are the previous images and ancillary data. Hypotheses are created in the Knowledge Engineer by choosing the hypothesis icon. For this lab the hypotheses were water, urban/built-up, forest, green vegetation, and agriculture. Rules are created with the corresponding icon, and each rule is given a name matching its class; for water, "WTR" was used. After the rules, variables are created. Here the variable was ec_cpw_al2011, with a variable type of raster pointing to eau_cpw_al2011cl.img. The following classifications are done the same way, and the variable does not need to be changed in the rule props. Each class is given a value of 1, 2, 3, 4, or 5. To correct wrongly classified pixels, ancillary data is used, and separate classes are created alongside the existing ones. The first new class is other urban, colored yellow, with a new rule OTR and a new variable other_urban that is given a value of 1 in the rule props. The second row in the rule props is set to ec_cpw_al2011.img with a value of 2. This creates an argument that pixels classified as urban in ec_cpw_al2011 that fall within the ancillary data should be reclassified as other urban. The urban rule is then opened to add a condition for the other_urban class in the second row: other_urban != 1, meaning pixels inside the other_urban ancillary data are not classified as plain urban. The same is done for green vegetation and agriculture: arguments are written to reclassify wrongly classified land, and rules are added so that the new agriculture and green vegetation classes are excluded from their parent classes. A Knowledge Classification is then run. In the Knowledge Classification window all available classes are selected and the cell size is set to 30x30. The output classes are consolidated so that there are only six: water, residential, forest, green vegetation, agriculture, and other urban.
The reclassified image using Knowledge Engineer
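                The heart of the other urban rule is a boolean intersection: a pixel is reassigned when it is urban (value 2) in the classified image and falls inside the ancillary layer. Here is a small numpy sketch of that logic; the arrays and the output class code are hypothetical:

import numpy as np

# Hypothetical classified image (values 1-5) and binary ancillary mask.
ec_cpw_al2011 = np.array([[2, 2, 1],
                          [4, 2, 3],
                          [2, 5, 2]])
other_urban_ancillary = np.array([[1, 0, 0],
                                  [0, 1, 0],
                                  [1, 0, 0]])

OTHER_URBAN = 6  # assumed code for the new class

# Rule: urban (2) in the 2011 classification AND inside the ancillary data.
reclass = ec_cpw_al2011.copy()
mask = (ec_cpw_al2011 == 2) & (other_urban_ancillary == 1)
reclass[mask] = OTHER_URBAN

# The complementary rule (other_urban != 1) keeps the remaining urban
# pixels in the original urban class.
print(reclass)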

                The second part of this lab is a neural network classification, carried out in a different remote sensing software called ENVI. In ENVI the image is can_tmr.img. RGB Color is selected and the available bands are set to 4, 3, 2. Region of Interest training sets were used: in the main image menu, Region of Interest is selected, and restoring CLASSES.ROI loads three regions of interest into the image. These regions appeared to be bare soil, forest, and agriculture, and they are used to classify the entire area. To run the neural network classification the path is Classification, Supervised, Neural Net. Can_tmr.img is the input file; in the parameters the 3 regions are selected as the classes, the logistic radio button is the activation method, and 1000 iterations are entered. The final step of this part was to create our own land classifications. Three ROIs were selected for the campus of the University of Northern Iowa; for me these classes were buildings, grass, and sidewalks. The ROIs are created by placing polygons around pixels.
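                For intuition, a similar network can be sketched with scikit-learn's MLPClassifier, using a logistic activation and an iteration cap like the lab's settings. This is only an analogy for ENVI's neural net classifier, trained here on made-up ROI pixel spectra:

import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical training pixels: rows are band values, labels come from ROIs.
X = np.array([[0.30, 0.25, 0.20, 0.40],   # bare soil
              [0.05, 0.08, 0.04, 0.45],   # forest
              [0.10, 0.15, 0.10, 0.55],   # agriculture
              [0.28, 0.24, 0.22, 0.38],
              [0.06, 0.09, 0.05, 0.43],
              [0.11, 0.14, 0.11, 0.52]])
y = ["bare soil", "forest", "agriculture"] * 2

# Logistic activation and up to 1000 iterations, echoing the lab settings.
net = MLPClassifier(hidden_layer_sizes=(8,), activation="logistic",
                    max_iter=1000, random_state=0)
net.fit(X, y)
print(net.predict([[0.29, 0.26, 0.21, 0.39]]))  # likely 'bare soil'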

Results 
Neural Network Iterations

UNI neural network image, personal classification


Tuesday, November 11, 2014

Lab 8: Advanced Classifiers

Goal and Background
                This lab was twofold, in that it introduced and built familiarity with two advanced classification algorithms: linear spectral unmixing and the fuzzy classifier. These advanced classifiers use powerful algorithms that are extremely helpful in increasing the accuracy of a classified image.
Methods
The first part of the lab was to use linear spectral unmixing. For this section another image processing software, ENVI, was used, because it allows pure pixels, or endmembers, to be viewed. The image used is ec_cpw2000.img. In ENVI there is an Available Bands List, where RGB Color is selected instead of Gray Scale and the layers of the image are assigned, similar to layer stacking in ERDAS Imagine. Layer 4 was put into the red color gun, layer 3 into the green, and layer 2 into the blue. This created a false color image that is used for endmember selection. Next was to create a principal component image, ec_cpw2000pc, which has one component band for each of the six reflective bands; PC Bands 1 and 2 contain most of the information from all the bands. PC Band 1, along with the ETM+ image, is loaded into ENVI. Next the scatterplot is made, which needs an X and a Y axis: PC Band 1 is X and PC Band 2 is Y. The resulting scatterplot is triangular in shape. To collect endmembers, a circle is drawn around one of the vertices. The first endmembers collected were bare soil: the Class tab was selected, then Items 1:20, and finally Green, and a green circle was drawn around one of the outer vertices. The pixels on the ETM+ image that corresponded to this vertex were highlighted green. The same procedure is followed to find water and agriculture endmembers, with water set to blue and agriculture set to yellow. These endmembers are saved with Export All, which opens the ROI Tool window, where the ROI file is saved. The scatterplot procedure is carried out again to find urban areas, with PC Bands 3 and 4 selected for X and Y instead of 1 and 2, and this class is exported as an ROI as well. Finally, linear spectral unmixing can be done through Spectral > Mapping Methods > Linear Spectral Unmixing. Ec_cpw2000.img is the input image, the endmember classes are imported, and the output image is ec_cpw2000frac. This creates four separate fraction images that highlight, in white, areas that are highly correlated to each endmember class.
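Under the hood, linear spectral unmixing models each pixel spectrum as a weighted sum of the endmember spectra and solves for the weights, which become the fraction images. Here is a small numpy sketch with assumed endmember spectra; ENVI's implementation adds options and constraints this omits:

import numpy as np

# Columns are assumed endmember spectra (bare soil, water, agriculture,
# urban) over four bands; the values are illustrative only.
E = np.array([[0.30, 0.05, 0.08, 0.25],
              [0.35, 0.04, 0.12, 0.27],
              [0.40, 0.03, 0.10, 0.30],
              [0.45, 0.02, 0.50, 0.32]])

# One pixel's spectrum: a mix of ~60% soil and ~40% agriculture.
pixel = 0.6 * E[:, 0] + 0.4 * E[:, 2]

# Unconstrained least-squares estimate of the fractions.
fractions, *_ = np.linalg.lstsq(E, pixel, rcond=None)
print(np.round(fractions, 2))  # ~[0.6, 0.0, 0.4, 0.0]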
The second part of this lab is fuzzy classification, an advanced classifier meant to help increase accuracy. The process of carrying out this classification is very similar to a supervised classification: numerous training samples are taken, all the training samples of a class are merged as in Lab 5, and the signature files are saved. To perform the fuzzy classification, the Supervised Classification window was opened, the image ec_cpw2000.img was input, the saved training sample signatures were used as the signature file, and an output file was created. Here is where the change occurs: the fuzzy classification and distance file buttons are selected, maximum likelihood is the parametric rule, feature space is the non-parametric rule, and 5 is entered for the best classes per pixel.
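The fuzzy output gives each pixel graded memberships in several classes instead of one hard label, with the distance file recording how far the pixel sits from each signature. Below is a loose numpy sketch of distance-based memberships, not ERDAS Imagine's exact formula, with invented signature means:

import numpy as np

# Assumed class signature means over four bands.
signatures = {
    "water":       np.array([0.04, 0.03, 0.02, 0.01]),
    "forest":      np.array([0.05, 0.08, 0.04, 0.45]),
    "agriculture": np.array([0.10, 0.15, 0.10, 0.55]),
}

pixel = np.array([0.07, 0.11, 0.07, 0.50])  # spectrally mixed pixel

# Inverse-distance memberships, normalized to sum to 1.
dists = {c: np.linalg.norm(pixel - m) for c, m in signatures.items()}
inv = {c: 1.0 / d for c, d in dists.items()}
total = sum(inv.values())
memberships = {c: v / total for c, v in inv.items()}

# The best classes per pixel are the highest memberships.
for c, m in sorted(memberships.items(), key=lambda kv: -kv[1]):
    print(f"{c}: {m:.2f}")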

Results
Fuzzy Classification LULC Map



Sunday, November 2, 2014

Lab 7: Change Detection

Goals and Background
                The main goal of Lab 7 was to gain knowledge of change detection in land use/land cover. Digital change detection is very important to remote sensing, as it shows environmental and socioeconomic progression, or even digression, over periods of time. For this lab there were two visualization methods used to look at land cover change over time. The first was a write function memory insertion, which highlights change between two images from different dates. The other was a From-To change method, which shows a specific land cover change from one class to another. Another part of land cover change is finding the percent change, which is demonstrated with an Excel table in the methods below.

Methodology
                The first part of the lab focused on change detection using Write Function Memory Insertion. The basis of write function memory insertion is using bands from two dates to highlight changes in the land, and three images are needed. The area for this lab was Eau Claire and surrounding counties. The three images used were an August 2011 red-band (band 3) image called ec_envs_2011b3.img, and two copies of a 1991 near-infrared (band 4) image of the same area, ec_envs_1991_b4.img and ec_envs_1991_b4copy.img. These three images are layer stacked in the ERDAS Imagine software and saved as a new image, ec_envs91-11chg.img. To show the change, the image bands need to be switched under the Multispectral tab: the 2011 image should be in the Red color gun and the 1991 images should be in the Green and Blue color guns. The new image highlights changed land in red. It shows a lot of change in urban areas, which can be explained by the ever-changing environment of cities and populated areas.
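                In array terms, write function memory insertion is just a three-band stack with the newer date in the red gun, so pixels that brightened between dates render red. Here is a toy numpy sketch with placeholder arrays standing in for the stacked bands:

import numpy as np
import matplotlib.pyplot as plt

# Placeholder brightness arrays standing in for the 2011 band 3 image
# and the duplicated 1991 band 4 image.
band_2011 = np.random.rand(100, 100)
band_1991 = band_2011.copy()
band_1991[40:60, 40:60] -= 0.5  # simulate an area that changed by 2011

# 2011 in the red gun, the 1991 copies in green and blue: areas where
# 2011 is brighter than 1991 appear red.
composite = np.clip(np.dstack([band_2011, band_1991, band_1991]), 0, 1)
plt.imshow(composite)
plt.title("Write function memory insertion (toy example)")
plt.show()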

                The second part of the lab used a different change detection method. The From-To change shows the change of an area and what it changed to. The area for this method is the Milwaukee Metropolitan Statistical Area and the years are 2001 and 2006. The first step was to look at the quantitative data and change of the area, which is done in Microsoft Excel. Two columns were created for each image: the first was the class of the LULC image and the second was the histogram count. To convert the histogram values into square meters, multiply the count by 900, since each pixel is 30 m x 30 m. The next step is to convert square meters into hectares by multiplying by 0.0001. Once the conversions were done for both 2001 and 2006, the percent change for the Milwaukee Statistical Area was found: subtract the 2001 hectares from the 2006 hectares, divide the difference by the 2001 hectares, and multiply by 100. This gives the percent change from 2001-2006, which can be positive or negative, as shown in the table below.
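                The arithmetic can be verified with a few lines of Python; the water class from the table below serves as a worked example (the pixel counts are back-computed from the hectare values, so they are illustrative):

# Worked example for the water class: histogram counts to hectares to
# percent change.
count_2001, count_2006 = 168699, 169698

sqm_2001 = count_2001 * 900          # 30 m x 30 m pixels -> square meters
ha_2001 = sqm_2001 * 0.0001          # square meters -> hectares
ha_2006 = count_2006 * 900 * 0.0001

pct_change = (ha_2006 - ha_2001) / ha_2001 * 100
print(round(ha_2001, 2), round(ha_2006, 2), round(pct_change, 2))
# -> 15182.91 15272.82 0.59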

Class             2001 (ha)     2006 (ha)     % Change
Water              15182.91      15272.82       0.59%
Open Space         32644.53      36899.10      13%
Urban/built up     89209.89      92993.76       4%
Bare Soil           1177.92       1456.20      23%
Forest             48051.00      46895.31      -2%
Shrub               5936.31       5431.77      -8%
Agriculture       158188.41     151771.23      -4%
Wetland            44820.00      44490.78      -0.70%
Total             395210.97     395210.97

                The final part of the lab was to create an LULC map using a model. This model, called the Wilson-Lula algorithm, was created by Dr. Cyril Wilson and a colleague at Indiana State University. The equation for the model is as follows:
ΔLUC = [IM1(v1…vn) - vt = set{0,1a}] & [IM2(v1…vn) - vt = set{0,1b}] = 1a & 1b
                ΔLUC is the From-To change class, IM1 is the image from the first date, IM2 is the image from the second date, and v1…vn are the class values. Vt are the classes not used for a sub-model, set{0,1} masks out the classes not used in the sub-model while highlighting the ones that are used, 1a is the from pixel value of the classes, and 1b is the to pixel value of the classes. The model uses two raster objects at the top, 10 function objects, 10 raster objects, another 5 function objects, and another 5 raster objects. The 2001 and 2006 images are put into the top two raster objects. The functions use the algorithm above, and the two sets of functions are where the from-to change occurs: the first function of a pair selects the original class from 2001 and the second selects what it changed to. The functions are set like this, but changed to match their from-to classes: EITHER 1 IF ($n1_milwauke_2001==7) OR 0 OTHERWISE. Under the functions are temporary raster objects, which should be set as integer. Under the second set of rasters is the second set of functions, which show the areas of change with expressions similar to $n13_memory & $n14_memory. The final rasters are the output; each of the five outputs is named for its from-to change class. Once these rasters are saved they are displayed on a map showcasing the change, as sketched below.
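                Stripped of the Model Maker plumbing, each sub-model is two binary masks combined with a logical AND. Here is a hedged numpy version of one sub-model; the class rasters are placeholders, and the "to" code of 2 is an assumption, since only the 2001 code of 7 appears in the lab:

import numpy as np

# Placeholder class rasters for 2001 and 2006 (codes are illustrative).
milwaukee_2001 = np.array([[7, 7, 3],
                           [7, 2, 5],
                           [1, 7, 7]])
milwaukee_2006 = np.array([[7, 2, 3],
                           [2, 2, 5],
                           [1, 7, 2]])

# First function pair: EITHER 1 IF (class == code) OR 0 OTHERWISE,
# once per date (the 1a and 1b in the equation above).
from_mask = (milwaukee_2001 == 7).astype(np.uint8)  # from class in 2001
to_mask = (milwaukee_2006 == 2).astype(np.uint8)    # assumed to class in 2006

# Second function: the intersection ($nXX_memory & $nYY_memory) flags
# pixels that made this specific from-to transition.
change = from_mask & to_mask
print(change)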

Figure 1: MSA Model for From-To Change Detection


Results

Figure 2: Write Function Memory Insertion for lab 7


Figure 3: LULC map for Milwaukee Statistical Area