Sunday, February 23, 2014

Field Activity #4: Conducting a Distance Azimuth Survey.

Introduction


The fourth assignment in Geography of Field Methods was to conduct a distance-azimuth survey using different geographic techniques.  The assignment called for students to find an open area where many survey points could be collected easily.  These points were taken with a compass and a laser device to gather the distance and azimuth, and every point had some type of information attached to it.  After the points were gathered, the next step was to enter them into an Excel file and import them into ArcMap.  The Bearing Distance To Line tool was then used to plot the data in ArcMap and view the results of the points taken.  This activity was designed to have the class become familiar with taking distance and azimuth points using the laser device and a compass.

Study Area

A big part of the assignment, before any points could be taken, was picking a study area in which to conduct the survey.  The base point must be in a relatively open spot with no trees, poles, or other items that would block it when viewed from above.  The area must also have a good number of features from which to gather points, for example trees, poles, garbage cans, or any item that can be easily seen from a distance.  Our group, Drew, Andrew, and I, chose our base point on the campus of the University of Wisconsin-Eau Claire, between Phillips and Schneider Hall, and mapped points toward and away from campus.  However, after running out of features to collect, we decided to move to the campus mall to find more.  A total of 100 points were taken from three different base points.

Figure 1: This image shows 2 of the 3 base points we used when collecting data.  The
3rd point can be seen in images below.  This aerial image of UWEC is very easy to use
because there are few trees.
Methods
Points were collected using two different instruments: a compass, which determines azimuth, and a laser device, which determines both distance, in meters, and azimuth.  Both techniques were used when collecting points.  As mentioned earlier, the first step in conducting a distance-azimuth survey is to find an area that is easily visible in an aerial photo but has many features to survey.  After finding the base point the next step is to complete the survey.  Our team of 3 decided to survey different types of features: trees, poles, garbage cans, bike racks, and tables.  Andrew used the laser device to collect points while Drew and I switched off sighting points with the compass and writing down the distance and azimuth of each point.  It is important to have two ways of collecting data in case one fails you, usually the technology.  As our course instructor, Dr. Joe Hupy, says, "Technology will always fail you"; for this reason it was important to collect the azimuth of each point with both the laser device and the compass.

Figure 2: This is an image of Drew collecting data with the laser device from our first base point.
By simply pressing a button, the azimuth and distance are given on the screen of the device.

Figure 3: It is very important to remain in the same spot when collecting points.  Utilizing the snow, we
made foot imprints to know where to stand each time when surveying points.

As each point was collected, it was recorded in our notebook under four categories: number, azimuth, distance, and type.  The collection went fairly quickly once the group got into a rhythm and really started to gather points.  We did have some trouble remembering which tree or pole we had already collected, but this was corrected by tracing our steps backwards.  After collecting 50 points between Phillips and Schneider Hall, we moved on to the campus mall to collect 25 points at each of two locations, since the assignment called for 100 points in total.  The assignment also had us collect points with both the laser and the compass.  For each point the laser and compass were used and the two were compared, which can be seen later in this blog.
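Since each point has both a laser and a compass azimuth, the comparison made later in this blog can be scripted.  A minimal Python sketch (the readings below are hypothetical, not our field data) that computes the smallest angular difference between the two devices, accounting for the wrap at 0/360 degrees:

```python
def azimuth_diff(a, b):
    """Smallest absolute angular difference between two azimuths, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

# Hypothetical laser/compass azimuth pairs for three surveyed features
pairs = [(12.0, 15.5), (359.0, 2.0), (180.0, 174.5)]
diffs = [azimuth_diff(laser, compass) for laser, compass in pairs]
print(diffs)  # [3.5, 3.0, 5.5]
```

The wraparound matters: a laser reading of 359 and a compass reading of 2 are only 3 degrees apart, not 357.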

Figure 4: The notebook which we used to record our data.  It was sorted into 4
different groups: Number, Distance, Azimuth, and Type.
After collecting the data, the next step was to enter it into an Excel sheet and import it into ArcMap.  The four categories containing 100 points were entered into an Excel sheet, and the numbers were formatted with six decimal places.  It is important to use six decimal places; otherwise ArcMap will not be able to use the data and the results will not display.  Along with entering the data into Excel, the latitude and longitude of the three base points had to be found.  Drew, a group member, did this by using an app on his phone that collects a latitude-longitude point at the click of a button.  Our first base point had a latitude, longitude of 44.79769, -91.499, as you can see in figure 5 below.

Figure 5: This is a portion of the Excel sheet before it was entered into ArcMap.  6 decimal places were
used when importing it into ArcMap.

Magnetic declination is the angle between magnetic north and true north.  The compass points to magnetic north, leaving room for error when collecting points.  NOAA has an application that will calculate the degree of declination for any location.  In Eau Claire the declination is 1.36 degrees west (negative), so 1.36 degrees was subtracted from every azimuth collected from the laser and compass.
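The declination correction itself is one line of arithmetic.  A minimal Python sketch, assuming (as stated above) a declination of 1.36 degrees west for Eau Claire, which also wraps the corrected azimuth back into the 0-360 range:

```python
def correct_azimuth(magnetic_az, declination=-1.36):
    """Convert a magnetic azimuth to a true azimuth.

    Eau Claire's declination is about 1.36 degrees west (negative), so
    true = magnetic + declination, wrapped back into the 0-360 range.
    """
    return (magnetic_az + declination) % 360.0

print(correct_azimuth(90.0))  # 88.64
print(correct_azimuth(0.5))   # 359.14
```

The wrap matters for bearings near north: a magnetic azimuth of 0.5 corrects to 359.14, not a negative number.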

The next step of the assignment was to import the Excel file into ArcMap and display it as a map.  First a geodatabase was created to store all the files that were going to be created, and then a basemap was imported from USGS to show an image of UWEC's campus in 2013.  Next, tools were used to create points and lines from the 100 points collected.  The Bearing Distance To Line tool, which can be found in Data Management > Features > Bearing Distance To Line, was used to give us a line from the base point to each surveyed point.  This tool was run three different times because we used three different base points; it would error when trying to use only one Excel sheet instead of three because of the three different X, Y coordinates.  The tool was also run twice per base point: once for the laser points and once for the compass points.  After the Bearing Distance To Line dialog was filled out correctly, the tool completed and lines appeared on our map.
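The geometry behind what the tool draws can be sketched in a few lines of Python.  This is a flat-earth approximation, fine for survey lines of a few hundred meters; the base point coordinates are ours from figure 5, but the azimuth and distance shown are hypothetical:

```python
import math

def endpoint(lat, lon, azimuth_deg, dist_m):
    """Approximate endpoint of a survey line (flat-earth, short distances).

    Azimuth is measured clockwise from north, distance in meters.
    One degree of latitude is ~111,320 m; a degree of longitude
    shrinks by cos(latitude).
    """
    az = math.radians(azimuth_deg)
    dlat = dist_m * math.cos(az) / 111320.0
    dlon = dist_m * math.sin(az) / (111320.0 * math.cos(math.radians(lat)))
    return lat + dlat, lon + dlon

# Our first base point, with a hypothetical reading of 45 degrees at 30 m
print(endpoint(44.79769, -91.499, 45.0, 30.0))
```

A line drawn from the base point to this computed endpoint is essentially what Bearing Distance To Line produced for each of our 100 readings.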

Figure 6: This image is an example of the first two base points after using the Bearing Distance To Line tool.
The yellow and gray lines are compass points and the red and purple lines are the laser points.

Next, to add points onto the ends of the lines, the Feature Vertices To Points tool was used.  This can be found in Data Management Tools > Features > Feature Vertices To Points.  This tool simply adds a point onto the end of each line to make the map easier to understand and compare to the real-world features.
Figure 7: This picture shows the map after the Feature Vertices To Points
tool was used for the first two base points.

Once all of these processes were done, only small tweaks were needed to complete the final map.  The 100 lines with points all appeared on the map, coming from three different base points.  The feature layers created were saved to the geodatabase, and the WGS 84 projection was used since latitude and longitude were being used instead of meters.

Results

The final results of our map were fairly accurate.  I was not pleased with the difference between the compass and laser collection; the compass was far too far off compared to the laser and real-world points.  This could be because we were not precise enough using the compass, because the compass was damaged and not working correctly, or for some other reason.  We did notice that the compass was not working correctly when we were standing next to the pole at our first base point, as you can see in figure 2.  However, when we changed base points we were no longer next to the pole and the compass was still significantly off from the laser azimuth points.

Figure 8: This is a small to medium scale view of the image.  All the points appear in lime green.  Some
error occurred, as you can see points appearing in the street or on top of buildings, which we did not
survey.  However, our base points were very accurate because of the app used by my partner Drew.

Figure 9: Red dots = compass points.  Green dots = laser points.  When comparing the two different
colors there is inconsistency.  In some cases they are very close to each other; in others they differ greatly.
Also, sometimes the laser point is way off and other times the compass point is off.  This makes it very
difficult to understand which device failed us.

Conclusion

Overall the surveying tended to be somewhat accurate; combining both the compass and laser points would give a fairly accurate map of the features surveyed in this area.  Our distance measurements were very accurate, and the base points were nearly perfect, which means our errors came mainly from recording the azimuth.  The results we got were fairly good but could have been better, and there is room for improvement if this assignment were done again.  Without knowing whether the laser was actually detecting the feature we wanted or bouncing off something else, it can be less accurate than the compass, which I would not have predicted.  However, the compass failed at some points too; one reason could be magnetic disturbances, but those should have affected the laser's azimuth too, unless the device accounts for them.

Collecting data points can be done in many different ways; using azimuth and distance can give quick and accurate results.  This can be done in almost any weather, either the old-fashioned way with the compass or by using new technology, the laser.  This activity has taught me that using new technology may not always be best.  Looking at the results, some of the laser data points are not accurate compared to the compass, showing that in some cases it is wise to use both new and old technology.

Sunday, February 16, 2014

Field Activity #3: Unmanned Aerial System Mission Planning


Introduction
The goal of this exercise is to improve critical thinking when planning for different scenarios encountered by geographers. Five different scenarios were given, with the goal of devising a plan for how to solve each one.  While planning for the scenarios, the use of a UAS (Unmanned Aerial System) was highly recommended as a big factor in the solving process because each scenario involved taking an image of an area.  For each scenario a plan was thought through to include costs, type of UAS, type of sensor, GIS software, time of year, and any other factors needed to complete the process.  However, because of the inexperience of the class, only the legwork of the scenarios was thought through, to give an overview of how to solve each mission.
Scenarios

 Scenario 1
A military testing range is having problems engaging in conducting its training exercises due to the presence of desert tortoises. They currently spend millions of dollars doing ground based surveys to find their burrows. They want to know if you, as the geographer can find a better solution with UAS.

Using a UAS to survey for desert tortoise burrows is a much quicker and more cost-effective way to discover where the burrows are than ground-based surveys. There are two main options that can provide high quality data for this kind of survey: LiDAR, and supervised classification using aerial imagery.

LiDAR can be used for this project because it collects elevation data in the form of a point cloud. The LiDAR sensor shoots a laser at the ground, and as the beam is reflected back it records the elevation it was reflected from. The LiDAR sensor requires a large UAS because of its weight, so most rotary propeller UASs are out of the question, but some fixed wing options will work, such as the one in figure 1 below.


Figure 1: A fixed wing UAV, capable of being equipped with a LiDAR sensor

Once the LiDAR data has been processed, a DEM (digital elevation model) will be created. Knowing how deep the tortoise burrows are, a threshold can be set that many feet/inches below the base height of the data. This will create a DEM in which the negative elevations represent the tortoise burrows.
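The thresholding step can be sketched with NumPy. The DEM below is a toy 4x4 grid (elevations in meters relative to local ground level), and the burrow depth cutoff is an assumption for illustration; a real workflow would use the gridded LiDAR point cloud:

```python
import numpy as np

# Toy DEM in meters relative to local ground level (negative = below ground)
dem = np.array([
    [0.1,  0.0,  0.2,  0.1],
    [0.0, -0.4, -0.5,  0.1],
    [0.1, -0.3,  0.0,  0.2],
    [0.2,  0.1,  0.1,  0.0],
])

burrow_depth = -0.25  # assumed minimum depth of a tortoise burrow, in meters
candidates = dem < burrow_depth  # boolean mask of possible burrow cells
print(int(candidates.sum()))  # 3 cells flagged for ground truthing
```

The flagged cells become candidate burrow locations to verify on the ground.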

This option is costly, but if millions of dollars are being spent on ground-based surveys it would be well worth it to use a UAS in this fashion. A second option, which will most likely be much less expensive, would be to fly a UAS, have it take images of the ground, and from these images use a supervised classification to automatically pick out where any tortoise burrows may be.

A supervised classification works by having the user select representative areas using reference sources such as high-resolution imagery. The software then characterizes the statistical patterns of the representative areas and classifies the image. The use of a multi-band camera makes the classification scheme much more accurate because the camera records data from a scene as individual color values. From these values a spectral signature can be derived. Using this signature, software such as ERDAS Imagine will select pixels in the image that are within a specified range of the signature, creating an image with one color representing a specific feature, such as blue for all water.

This will reduce the time spent discovering tortoise burrows because the burrows have a unique spectral signature. Since the upturned soil will stand out from the ground, it will be easy to select a burrow on an image and specify that all pixels with similar spectral signatures should be classified the same.
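A stripped-down version of this idea can be sketched as a minimum-distance classifier in Python; real software such as ERDAS Imagine uses more sophisticated statistics, and the signature values below are made up for illustration:

```python
import numpy as np

# Mean spectral signatures (red, green, blue, NIR) from hypothetical
# training areas digitized over the imagery
signatures = {
    "burrow":     np.array([120, 100,  80,  90]),
    "vegetation": np.array([ 60, 110,  50, 200]),
    "bare_soil":  np.array([150, 130, 110, 120]),
}

def classify(pixel):
    """Assign a pixel to the class with the nearest mean signature."""
    pixel = np.asarray(pixel, dtype=float)
    return min(signatures, key=lambda c: np.linalg.norm(pixel - signatures[c]))

print(classify([125, 98, 85, 95]))  # burrow
```

Every pixel in the image would be run through this, and the "burrow" pixels become the candidate locations to ground truth.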

This process does involve some ground truthing to verify that the classified burrows are actually burrows and not randomly selected pixels that happen to be similar. Having the person who classified the images do this is best because they will know exactly where the classified burrows are.

A camera that captures imagery in multiple bands and would be excellent for this kind of task is the UltraCam, shown in figure 2 below. This camera will produce high quality images with the capability to be used in a supervised classification.

Figure 2: Ultra Cam camera capable of taking images in panchromatic, red,
green, blue, and infrared channels


Scenario 2
A power line company spends lots of money on a helicopter company monitoring and fixing problems on their line. One of the biggest costs is the helicopter having to fly up to these things just to see if there is a problem with the tower. Another issue is the cost of just figuring how to get to the things from the closest airport.

Instead of using a helicopter and having someone investigate power line issues it would be much safer and more cost effective to use a rotary UAS (unmanned aerial system). The rotary UAS will be able to fly extremely close to the power line without risk of major damage to the pilot or anyone else if it comes in contact with the line. This is because of how the propellers on the UAS are positioned; they allow for a stable flight with the ability to make sharp turns. Figure 3 shows an image of a rotary UAS. Notice how the propellers are evenly distributed around the center of the vehicle. Pictures of any damage can be taken with ease because the rotary UAS is able to hover in place and can provide not only pictures of the damage but real time video of any issues.

Figure 3: Rotary UAS equipped with a camera; the propellers allow
the camera to stay stable

A major advantage to using a UAS like this is that you can launch and land the vehicle from virtually anywhere. Not only does this eliminate the need for an airport, it also eliminates time wasted waiting for a helicopter to arrive near the power line. Having a helicopter fly close to power lines creates an issue of pilot safety and also the safety of anyone who may be on the ground. Cameras can take amazingly high quality images from a distance, but even then you could get higher quality by mounting a similar camera onto a rotary UAS and having it fly in and hover much closer to the power line.

A disadvantage to using the UAS is that typically these types of vehicles have less flight time. This is where a helicopter outdoes the UAS. Even though the flight time may be less the cost of a potential injury to anyone involved in surveying is nonexistent with the UAS since the pilot can be stationed almost anywhere.

Scenario 3
A pineapple plantation has about 8000 acres, and they want you to give them an idea of where they have vegetation that is not healthy, as well as help them out with when might be a good time to harvest.

When examining the task of finding healthy vegetation over an 8000 acre area, the cheapest option I can think of would be to download a LANDSAT image for the area and then examine the infrared band. LANDSAT is an abbreviation for Land Remote-Sensing Satellite, which orbits the world with a revisit interval of 16 days for the newest satellite (LANDSAT 8). That means that every 16 days there will be a new image for the same area.

LANDSAT has sensors which record light reflected from the ground, similar to what a normal camera would do, but it can also record the near-infrared energy being reflected, which can be used for vegetation analysis: the healthier a plant is, the more near-infrared energy it reflects back to the sensor. The files downloaded from LANDSAT represent each band the satellite records light in (red, blue, green, infrared, shortwave infrared, etc.). These bands come as black and white TIFF files which can be used/opened in virtually any kind of image manipulation software.

The TIFF files are black and white because of how the sensor records each band. For anything blue, such as water, the pixels that make up the water will have a higher value in the blue band than pixels for land. The same principle applies to green objects such as plants and grass, and so on for other colors. The infrared band will give higher pixel values to objects that reflect more infrared radiation than other objects. The infrared band can be opened using any standard image viewing software: the whiter an area is, the more infrared energy is being reflected, and thus the healthier the vegetation. In figure 4 below you can see that the agricultural fields are much healthier and closer to harvest than other natural areas in the image.


Figure 4: Landsat image with healthy vegetation appearing in white; the red
circles show the healthy vegetation
This option is completely free as long as you have an internet connection and a way to unzip the downloaded file and view the images. Although this option saves a lot of money, it does have a few downfalls. First, since the satellite is on a 16 day interval you won't be able to have images taken on demand, and even if you find an image for the date you want, there is a chance it could be filled with clouds, which would distort or even block the ground altogether. Assuming you go with this method of using LANDSAT images, you may run into an even bigger problem which would make you start over completely: satellite failure. This has already happened to the previous LANDSAT 7 satellite. The images taken from LANDSAT 7 would be of similar quality to LANDSAT 8, but they include a large amount of missing pixel data, so the images produced are virtually useless for any kind of analysis like checking on the health of a pineapple plantation.
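Although not used above, the standard way to condense the red and near-infrared bands into a single vegetation-health number is NDVI (the Normalized Difference Vegetation Index); a sketch with toy pixel values, where healthy vegetation approaches +1:

```python
import numpy as np

def ndvi(red, nir):
    """NDVI = (NIR - Red) / (NIR + Red); healthy vegetation approaches +1."""
    red = red.astype(float)
    nir = nir.astype(float)
    return (nir - red) / (nir + red)

# Toy 2x2 pixel values for the red and near-infrared bands
red = np.array([[50, 200], [60, 180]])
nir = np.array([[200, 210], [40, 190]])
print(ndvi(red, nir).round(2))  # top-left pixel (0.6) is the healthiest
```

Mapping NDVI over the whole plantation would give the same healthy/unhealthy picture as eyeballing the infrared band, but with a number that can be compared across dates.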

A second option would be to attach an infrared camera onto a fixed wing UAS (unmanned aerial system) and have it fly over the plantation recording infrared radiation, producing an image very similar to the one produced by LANDSAT. Figure 5 below shows an infrared camera capable of being attached to a UAS. This option of using a UAS will include a cost of a couple thousand dollars, most of which goes toward the infrared sensor and the UAS, but the money saved by not having workers check on the entire plantation's health might be worth it. By using the UAS you would be able to have on-demand infrared images taken of the plantation instead of waiting and hoping that the image from LANDSAT is of high quality.

Figure 5: Infrared sensor capable of being mounted on a UAS, used to take images in infrared

To discover the best time to harvest, you could examine the infrared images to see when the plantation is mostly white, meaning healthy. With LANDSAT images you have access to images from previous years, so you could start to see a trend in when the plantation is at its peak health and ready to be harvested. The LANDSAT images would give a good approximation of this trend, but the use of a UAV with an infrared sensor would give a better look at exactly when the plantation is at peak health. Since LANDSAT is free to use, it may not be a bad idea to investigate those images and use the UAV in conjunction.
Scenario 4
An oil pipeline running through the Niger River delta is showing some signs of leaking. This is impacting both agriculture and loss of revenue to the company.

First, many factors need to be accounted for; the agriculture could also be affected by other factors including drought, bad soil, and overproduction.  The Niger River is also known as one of the most polluted rivers in the world, so fixing the oil leak might not prevent wasted agricultural area or crops.  Many questions need to be asked before starting the project, including: what time of the year is it?  This will affect the river water level and the spread of the oil; if the Niger River water level is high, the dispersal of the leaking oil will affect the crops more.  The state of the crops should also be known: are they being harvested at this time, or is the season in a transition?

First, an image of the area should be taken to find out where the leakage is occurring.  When looking for an oil leak, areas of black, the color of oil, should be identified.  The black will be heaviest near the leak and then spread out as it travels down the river.  If the river is relatively clear, which should also be known before taking the image, the oil leak should be relatively easy to find.  This image can be taken either by a UAV (unmanned aerial vehicle) controlled by a computer or by a balloon, depending on the expense of the equipment and the weather.  The disadvantage of using a UAV to take the image is that it will be expensive, ranging in the thousands of dollars, but it will be the easiest and most efficient way to take the image given the range a UAV can have.  A 'normal' high quality camera should be fine for finding the oil leak; no special sensors should have to be used.  The advantage of using a balloon to take the image is that it will be very cheap and relatively easy to use compared to flying a UAV.  The disadvantage is that the balloon may be hard to control in the wind, and its range will be less than the UAV's.
However, a third option can be used to get more accuracy: determining the extent of the oil leak by looking at vegetation health with a near-infrared sensor. The agriculture should be least healthy surrounding the oil leak and get healthier moving away from it.  The near-infrared image will show healthy vegetation appearing in white and unhealthy vegetation shading from gray to black.  Knowing where the agriculture is most unhealthy will help determine the area of the oil spill.  This sensor will be more expensive and should be flown on an unmanned aerial system because of the risk of losing it.

Using the UAV to take an image of the Niger River Delta to find the oil leak is the best option in this scenario.  It will be on the higher end of the cost range, but for a serious problem like an oil leak the best option should be used.  A near-infrared sensor to look at vegetation could also be used along with the UAV.  After these steps are taken and clean images are produced, the oil leak should be able to be found and fixed, helping revenue and stopping the contamination of crops.  Two links that sell UAS were found: the first is less expensive and of lesser quality, and the second is more expensive with more options of UAVs.



Figure 6: Image of a UAV being launched into the air, ready to be flown
around and used to capture images.  The military uses UAVs to
capture aerial images.
  
Scenario 5
A mining company wants to get a better idea of the volume they remove each week. They don’t have the money for LiDAR, but want to engage in 3D analysis.

In order to figure out how much ore you are removing from the open pit mine, you will need to obtain 3-dimensional images of the mine to ultimately create a DEM (digital elevation model) of it. Obtaining these 3-dimensional images can be done through photogrammetry camera systems mounted on a fixed wing UAS. Photogrammetry camera systems have automated film advance and exposure controls, as well as long continuous rolls of film. Aerial photographs should be taken in continuous sequence with approximately 60% overlap. This overlap between adjacent images enables 3-dimensional analysis for extraction of point elevations and contours. Once the images have been shot by the fixed wing UAS, a technique called least squares stereo matching can be used to produce a dense array of x, y, z data, commonly called a point cloud. A DEM image like the one below (figure 7) can then be modeled in ArcGIS to accurately reflect the contours of the mine as well as its elevation levels.
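The 60% overlap requirement translates directly into exposure spacing along each flight line. A quick Python sketch (the footprint and line length are hypothetical numbers, not from the scenario):

```python
import math

def photo_spacing(footprint_m, overlap=0.60):
    """Distance between successive exposures for a given forward overlap."""
    return footprint_m * (1.0 - overlap)

def photos_per_line(line_length_m, footprint_m, overlap=0.60):
    """Rough count of exposures needed along one flight line."""
    spacing = photo_spacing(footprint_m, overlap)
    return math.ceil(line_length_m / spacing) + 1

# If each photo covers 100 m on the ground, 60% overlap means firing every 40 m
print(photo_spacing(100.0))           # 40.0
print(photos_per_line(1000.0, 100.0)) # 26
```

In practice the flight planning software handles this, but the arithmetic shows why higher overlap means many more photos per flight.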


Figure 7: Digital elevation model showing elevations; red is higher
elevation and blue is lower elevation
Since you will know the elevation levels of the mine, every new DEM created from subsequent point clouds will reflect the elevation changes that occurred over a given period of time. This change in elevation will allow you to see the volume of ore being taken out of the mine. Obtaining an elevation point cloud with a fixed wing UAS equipped with a photogrammetry camera system is much faster than manually surveying the mine. It can be done as often as needed with relative ease, saving your company massive amounts of time and ultimately money. This method is not as accurate as using LiDAR data, but it is much cheaper; if you were to take weekly readings of the mine using LiDAR you would spend a fortune on data collection. I see photogrammetry as your most viable option if you are set on taking weekly volume tests.
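The volume calculation from two DEMs is just cell-by-cell differencing. A NumPy sketch with a toy 2x2 grid (real DEMs would have millions of cells, and the cell size would come from the flight parameters):

```python
import numpy as np

cell_area = 1.0 * 1.0  # assumed 1 m x 1 m DEM cells

# Toy DEMs of the same pit, one week apart (elevations in meters)
dem_week1 = np.array([[10.0, 9.0], [9.5, 8.0]])
dem_week2 = np.array([[9.0, 8.5], [9.5, 7.0]])

removed = np.clip(dem_week1 - dem_week2, 0.0, None)  # only material taken out
volume = float(removed.sum() * cell_area)
print(volume)  # 2.5 cubic meters removed
```

Clipping at zero keeps fill or dumped material from cancelling out the ore that was actually removed.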

Sunday, February 9, 2014

Field Activity 2: Visualizing and Refining Terrain Survey

Introduction

This project is a follow-up to the previous project, the creation of a digital elevation surface.  The task was to import the data collected from the sandbox terrain as an Excel file into ArcMap and ArcScene, then to create a surface that best represents the data using an interpolation method.  These methods include IDW, Natural Neighbors, Kriging, Spline, and TIN, each of which will be explained in further detail later in the blog.  The next task of this assignment was to reevaluate the data and possibly revisit the sandbox to take more data points (if any areas seemed weak) to enhance the image.

Methods

After surveying the sandbox terrain and collecting data points, the next step is to import the Excel file into ArcMap or ArcScene.  The difference between the two is that ArcScene has the capability of viewing images in 3D.  This project uses different elevation points, making ArcScene very useful when converting the points into an image.  The first step is to add the Excel file into ArcScene and then convert it into a point feature class.  This can be done by clicking File > Add Data and filling out the dialog to fit your needs.  A step by step process of how to convert data into a point feature class can be found here.  No coordinate system or units were used in the process because the data would become skewed and changed if units were used.  Points will then appear in ArcScene, differentiated by elevation, ready to be converted using the different interpolation techniques.  Using the ArcToolbox and various tools under 3D Analyst, the point feature class was converted using the five techniques I mentioned earlier.
Figure 1: Image of the ArcToolbox in ArcScene
3D Analyst Tools opened
After each tool was used to create an image in the form of IDW, Natural Neighbors, Kriging, Spline, and TIN, the best product was chosen to represent the data, which I will explain in the discussion section of the blog.  One step of this activity was to revisit the sandbox and collect more data.  However, my group did not complete this step because the data collected was already very accurate to our original model.  The team did a great job of coming up with an easy and efficient way to collect many data points by using the rope system I mentioned in my previous blog.  Another factor in why we did not collect more data points is that there were several snow storms which ruined our surface; it would be very hard to re-create the exact surface we used in our original data collection.  It was a group decision not to recollect data, and all thought that it was not necessary and would be inefficient.

This section will discuss the different interpolation techniques used in ArcScene to create surfaces from the elevation points.  More on each technique can be found here on an ArcGIS help page.

IDW 
IDW stands for Inverse Distance Weighted; it estimates cell values by averaging the values of sample data points in the neighborhood of each processing cell (ArcGIS Help). The IDW surface was created in ArcScene to show a 3D image that better represents the collection of data.  420 was entered in the number of points box when running the tool because our group collected 420 points.
Figure 2: 3D IDW
Red- high elevation, Blue- low elevation
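The idea behind IDW can be sketched in plain Python: each unknown cell is a weighted average of the samples, with weight 1/distance^power. The sample points below are toys, not our 420 sandbox points:

```python
import math

def idw(points, x, y, power=2):
    """Inverse Distance Weighted estimate at (x, y) from (px, py, z) samples."""
    num = den = 0.0
    for px, py, z in points:
        d = math.hypot(x - px, y - py)
        if d == 0:
            return z  # exactly on a sample point, no interpolation needed
        w = 1.0 / d ** power
        num += w * z
        den += w
    return num / den

# Toy elevation samples (x, y, z)
samples = [(0, 0, 1.0), (1, 0, 2.0), (0, 1, 3.0), (1, 1, 4.0)]
print(idw(samples, 0.5, 0.5))  # 2.5 (center of four equally weighted points)
```

Because the surface must pass through every sample exactly while averaging in between, dense clusters of points produce the cone or "bullseye" shapes visible in figure 2.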

Natural Neighbors
This method finds the closest subset of input samples to a query point and applies weights to them based on proportionate areas to interpolate a value.  Natural neighbor works by weighting each point by how close it is to the cell being interpolated (ArcGIS Help).  This time, however, no point count was entered because it was not required, similar to the style of TIN.  This image was created in ArcMap and is a 2D representation of the data collected.
Figure 3: 2D image of Natural Neighbor
Brown representing high points and Blue low points

Kriging
Kriging is an advanced geostatistical procedure that generates an estimated surface from a scattered set of points with z-values.  Kriging determines height by looking at each value compared to the other values.  I cut the number of points in half and entered 210 in the kriging creation box.  9 classes were used to represent the data, with green being the deepest and dark red being the highest elevation.  This is a 3D model of the data created in ArcScene.
Figure 4: 3D Kriging
Red-high points, Green- low points

Spline
Spline uses an interpolation method that estimates values using a mathematical function that minimizes overall surface curvature, resulting in a smooth surface that passes exactly through the input points (ArcGIS Help).  420 points were also entered when creating this method, making it similar to Natural Neighbor and IDW.  Spline also shows cone-shaped areas that were not present in our actual creation; the surface in the snow was very smooth and flat in most areas.

Figure 5: 3D Spline flipped to show
the other end of the creation
Figure 6: 3D Spline
Red - high elevation Blue - low elevation
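The "minimizes overall surface curvature" idea has a classic concrete form called the thin-plate spline.  The sketch below is my own illustration of that idea, assuming an exact, regularization-free fit; it is not how ArcGIS's Spline tool is implemented internally, and the function names are mine.

```python
import numpy as np

def tps_fit(pts, z):
    """Fit an exact 2D thin-plate spline through (x, y) points with
    heights z.  Returns the data needed by tps_eval."""
    pts = np.asarray(pts, float)
    z = np.asarray(z, float)
    n = len(pts)
    r = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
    K = np.zeros_like(r)
    mask = r > 0
    K[mask] = r[mask] ** 2 * np.log(r[mask])  # radial basis r^2 ln r
    P = np.hstack([np.ones((n, 1)), pts])     # affine terms: 1, x, y
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros(n + 3)
    b[:n] = z
    return pts, np.linalg.solve(A, b)

def tps_eval(model, x, y):
    """Evaluate the fitted spline surface at (x, y)."""
    pts, coef = model
    d = np.linalg.norm(pts - [x, y], axis=1)
    phi = np.zeros_like(d)
    m = d > 0
    phi[m] = d[m] ** 2 * np.log(d[m])
    return phi @ coef[:len(pts)] + coef[len(pts):] @ [1.0, x, y]
```

A spline must honor every sample exactly while staying smooth everywhere else, so a single odd reading forces a tall cone around it, consistent with the cone-shaped artifacts described above.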

TIN
TIN stands for Triangulated Irregular Network; it represents the surface as a set of non-overlapping triangles built from the z-values of the input points (ArcGIS Help).  The TIN method displayed the data in a very unique way compared to the rest of the methods: digital triangles are drawn between the nearest data points, connecting them into a faceted surface.
Figure 7: 3D TIN 
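The triangle idea is easy to make concrete: inside each TIN facet, elevation is just a linear blend of the three corner heights.  A minimal sketch of that blend via barycentric coordinates (names are mine, and the query point is assumed to lie inside the triangle):

```python
def tin_interpolate(x, y, tri):
    """Linearly interpolate z at (x, y) inside one TIN triangle,
    given as three (x, y, z) vertices, using barycentric weights."""
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = tri
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    l1 = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det
    l2 = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det
    l3 = 1.0 - l1 - l2
    return l1 * z1 + l2 * z2 + l3 * z3
```

This is also why a TIN looks faceted: within each triangle the surface is a flat plane, so smoothly curving terrain gets chopped into flat pieces.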
Discussion

After creating all five of the interpolation methods, I came to the conclusion that the kriging method best represents the data.  The kriging method, figure 4, best represents the data because of its smoothness and how easy it is to interpret.  Compared to the other four methods, kriging has a smoothness and clearness about the image created.  Using only 210 points, compared to the 420 used for the others, contributed to it producing the best image.  If I had been more educated in ArcGIS and ArcScene the outcomes could have been a lot different, but since I am still an amateur in this area most of the creations were not clean and were hard to read.  A big key in choosing the kriging method was how it represented the deepest part of the model.  Looking through each figure it is easy to tell the bottom right corner is very deep and hard to understand; the kriging model represents this corner best because of its smoothness.
     Figure 2, the IDW technique, was the worst of the five at creating a continuous surface that displayed the data.  The cone shapes visible in the image look very bad and do not represent the data well at all.  I believe the image is so skewed and cone-shaped because all 420 collected points were used; if fewer points had been used, the image might have become clearer and possibly useful.
     Natural neighbor represented the data very well but was not my favorite.  It is very similar to kriging, but it does not appear as smooth because of some jaggedness in the surface where the colors change.
     Kriging represented the data collected by my group the best.  The continuous surface created looks very smooth, as each color blends into the next, making the image appear clean.  I used nine classes to represent the data.  The bottom right corner, where the data is deepest, looks very clean and easy to read compared to the rest of the methods.
     The spline method is very similar to natural neighbor in that it represents the data in an un-smooth way.  The area where the data is deepest is badly represented and difficult to read in 3D form in ArcScene, making spline not as useful as some of the other techniques.
     In figure 7, I really liked how deep the TIN data appears in ArcScene, really displaying the different elevations present in the surface.  However, I would not choose this method to best represent the data, because the triangles sometimes fail to capture the smoothness of the data and show it as triangular facets instead.  The surface our group created did not contain any triangular features, making the TIN a poor way to represent the data.

Conclusion

This field activity, combined with the week before, was very fun and challenging to complete.  It definitely tested my critical thinking skills to create a surface and a grid system to measure its elevation.  The hardest part for me was converting the data into continuous images in ArcGIS.  I had never created any of these surface types other than TIN, and with my inexperience I found the Arc help menu very useful for creating the images.  I thought the team I belonged to did a fantastic job completing the task in an efficient and successful way, allowing the project to be finished without many troubles.


Field Activity 1: Creation of a Digital Elevation Surface

Introduction

Geospatial Field Methods is a course taught at the University of Wisconsin-Eau Claire by Joe Hupy.  This course is designed to help students become familiar with using geographic field techniques outside the classroom in a 'hands on' learning style.  A key goal of the course is to have students think critically about concepts and use our skills to fix problems or answer questions.  This first assignment was a field activity in which the class, split into groups, created digital elevation surfaces, then measured them by creating our own grid system with X, Y, and Z measurements.  These surfaces were created outside in sand boxes; we designed the changes in elevation in snow by digging down and building it up.

Methods

The first task of the project was to build the structure so our group could take measurements.  The plan was to flatten the top of the surface and dig down from there.  This was done to set up our grid system and to make the measurements much easier.  We then proceeded to dig down using our hands and shovels, trying to include a ridge, hill, depression, valley, and plain.  After the digging was completed, the grid system was set up by measuring the side of the sand box every three inches and marking it with a pin.  Then ropes were pinned down tightly and stretched across the sand box, making rows and columns.  You can see this grid system, using ropes and pins, in the picture below.
The snow was sprayed with water
so it would freeze,
allowing accurate measurements.

Image of our grid system.  It took a long time to set up
each rope, especially with the temperature at zero degrees.

The next step was to take measurements.  The long side of the sand box was given Y coordinates and the short side X coordinates, and since elevation had to be recorded, a Z coordinate was used for the height of the surface.  The measurements were taken in centimeters in an ordered fashion: at X1, Y1, the Z value would equal the height of the surface at that cell.  Our group used a measuring stick, placed in the upper right corner of each square, to take consistent measurements.  The measurements were immediately entered into an Excel spreadsheet, as one of my group members had his laptop outside to type in the numbers.
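That spreadsheet step can be sketched in a few lines.  This is a hypothetical illustration (the function name, column names, and file name are mine), turning a grid of depth readings into numbered X, Y, Z rows like the ones we typed into Excel:

```python
import csv

def write_survey(depths, path="survey_points.csv"):
    """Write grid readings to CSV.  depths[row][col] is the Z reading
    (in cm, negative where we dug down) at grid cell (col+1, row+1)."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["point", "x", "y", "z"])
        point = 1
        for yi, row in enumerate(depths, start=1):
            for xi, z in enumerate(row, start=1):
                writer.writerow([point, xi, yi, z])
                point += 1
```

A file laid out this way, with one numbered point per row, is ready to be added to ArcMap as XY data.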

My group taking measurements, one reading off the
height of the surface, and the other typing it into his
laptop

This is a portion of our group's Excel
spreadsheet, containing X, Y, Z, and
the point number.

A total of 420 measurement points were taken, giving our team a very accurate reading of the elevation surface.  Our team decided to take closely spaced measurements in every area of the sand box because the terrain varied so much.

Discussion/Conclusion

Creating our digital elevation surface was very fun and challenging.  It made the group and I think critically about the best way to shape our sandbox and take measurements.  Making all the elevation measurements below zero, since we dug down from the flattened top, was a great idea that made the process go much more smoothly.  I would predict that the results that come in the next week will be the most accurate of all the groups compared to their real-life structure.  Our grid system, using the strings to carefully measure each X, Y, and Z coordinate, was very accurate, and no guessing was needed when taking the measurements.  The images created in ArcGIS should look very similar to the original structure.