Eye Tracking Geologists in the Field


_MG_0822

Abstract: The history of eye-movement research extends back at least to 1794, when Charles Darwin’s grandfather, Erasmus Darwin, published “Zoonomia”, which included descriptions of eye movements due to self-motion. For the next 200 years eye-tracking research was to remain confined to the laboratory. That all changed when Michael Land built the first wearable eye tracker at the University of Sussex and published a seminal paper entitled “Where we look when we steer”. Inspired by Land’s work, a group of cognitive scientists, computer scientists, computer engineers and geologists have been working to extend our knowledge of how we actually use vision in the real world. I was fortunate enough to participate in this ground-breaking experiment earlier this year, and I wanted to share the experience with the geology community! In this blog article I will give a brief summary of the project I was involved in and the things I learned that can really help you be a better field geologist!

Untitled-1

How do we look at a scene?
Most animals do not simply gaze at a scene – we don’t have the resources to take in every single thing in front of us. Instead, our brains have developed a cunning cheat whereby we target important aspects of the field of view and build up an image in our mind from those selected areas. The bandwidth and processing power of our eye-brain circuitry therefore only need to deal with a small portion of the image at once, and a lot of interpolation can be done at source in the brain. This is illustrated in the figure below:

Fig.1 Unlike birds, most animals build an image of the field of view before them by targeting important areas sequentially.

Figure 1a shows what a field of view may “look” like to our brain before any fixations are made. Using our limited understanding of this new scene, our brain then targets key areas of the field of view (fig. 1b-c) for further investigation. Fixations of varying durations, made in varying order, allow our brain to develop, in real time, a better understanding of what is before the eyes. This process continues for as long as we look at a scene.

This process is heavily influenced by the individual. The specific way we go about deconstructing a scene depends on a whole host of factors: gender, intelligence, familiarity with the surroundings, experience with the scene, state of mind (distraction, tiredness) – basically any environmental factor that can affect the brain may change the way our brain and eyes investigate a scene. We can therefore use eye tracking to see how different categories of people look at the same scene. To do this, eye-tracking software can gather the following information:

  • Fixation Points
    • Location, duration, sequence, saccade types
  • Pupil Dilation/constriction
  • Blinks

Eye tracking technology is therefore a powerful tool in cognitive research that may be used to access the brain. Here are some examples of common applications of eye-tracking:

  • Commercial Applications
    • User interface design
    • Marketing and product placement
    • Targeted marketing
  • Primate/infant/adult/geriatric research
  • Safety
    • Fatigue detection
    • Concentration detection
  • Sports Training
    • Motorsports
    • Ball sports
  • Accessibility
    • Communication tools for disabled people
    • Advanced methods for computer-human interaction
  • Medical
    • Laser Eye Surgery
    • fMRI, MEG, EEG

Eye-tracking is of particular interest to the commercial sector. Interestingly, commercial applications exploit the way the brain and the eyes work: for instance, there’s a proven reason that advertisements at the top of the Google search results page cost the most…

Heatmap of fixations on a Google search result page. (http://highongoogle.net/images/seo-bolton-google-eyetracking.jpg)

So what does this have to do with geology? Well, geologists are just one of many communities targeted by this new research. As mentioned in the abstract, eye-tracking technology has only recently become mobile enough to be taken out of the laboratory and into the real world. Geologists are often confronted with new scenes in the field and must use their eyes and brains to really understand what it is that they are looking at. Therefore, by taking amateur and professional geologists into the field and conducting eye-tracking experiments, we can gain insights into the different ways professionals and novices approach visual problems.

The Study of Geologists in the Field
A joint study between the University of Rochester and Rochester Institute of Technology (RIT) has for the past five years been using a hefty NSF grant ($2m) to research geologists in the field. Principal investigators include Robert Jacobs, Jeff Pelz, and John Tarduno. The beautiful wearable eye trackers were developed by Jason Babcock. The research has been conducted in a variety of environments, but the part that I took part in was a 10-day field excursion to the Western USA to visit some truly amazing geological localities.

First, let’s take a look at the technology that I was wearing for the 10 days, and how it worked:

Backpack – Contains an Apple MacBook Air, the powerhouse of the mobile eye-tracking unit and home to the custom-built processing software.

_MG_0175

Head Unit – Contains a front-facing camera above the right eye, and an eye-facing camera and IR bulb for filming eye movements. We also had to wear a ridiculously large sombrero to shield the cameras from direct sunlight.

_MG_0187

Network – All the devices were connected to a local network, the router for which was being carried around here by Jeff Pelz. The custom software meant that he could use his iPhone to get real-time footage from any participant’s front-facing or eye-facing camera at any time. This was useful for making adjustments and also for keeping participants looking where they should during experiments.

_MG_0678

The Vans – The network extended back to the vans too. This van was the tech-van and contained massive hard drives for backing up all the data. Each evening, the RIT folk would sit in this van and process/backup the data.

_MG_0178

Calibration – Each time we wore the mobile eye-tracking units, we had to complete a series of calibration exercises so that our fixations could be mapped onto the video from our front-facing cameras. This was done by standing a few metres away from a calibration spot (seen below on the back of Tomaso’s notebook) and rolling the head whilst keeping our eyes fixated on the spot. As if we didn’t look stupid enough!

_MG_9800

Gigapan Images – Meanwhile, other members of the tech team were taking panoramic GigaPan images of each scene onto which the tracking data could be overlaid.

_MG_9517

A typical stop would work something like this (somewhat similar to a roadside execution…):

  1. On approach to the locality we were told via radio not to look at the surroundings
  2. We’d get out of the vans, get the trackers on and calibrate
  3. Someone would then lead us to the viewing point – all the while we had to look down at our feet
  4. At the viewing point we would be given a question to address in our minds – often something like “What is the evidence this is a tectonically active area?”
  5. We were then told to look up and analyse the scene in silence for around a minute
  6. After the allocated time was up we then had to answer questions about what we had observed
  7. Following questions we would then be given an explanatory guide to the geology by John Tarduno

The Results
Unfortunately, I wasn’t allowed to know the conclusions of the study so far during the trip – that would have ruined the experiment. Similarly, if you ever think that you are going to have the chance to take part in a similar study – STOP READING NOW. Despite the lack of information I was privy to, I did manage to get some of the conclusions of the study from the authors before I left. What I am allowed to divulge is pretty intuitive, but it may help you, and even your students, learn to analyse scenes better.

Put simply, professional geologists make fewer, longer and more systematic fixations when looking at scenes of interest. This makes sense – your brain targets the parts of a scene that will yield the most useful information for the problem in mind, i.e. what’s the geological history of this outcrop? Conversely, the students in this experiment made lots and lots of short-lived fixations all over the scene, in seemingly random places, as they tried to search for something they might understand. Here are some visualisations of the differences between students and experts.
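
As a toy illustration of how that difference shows up in the numbers (the data below are made up for illustration, not results from the study), a quick R summary of a hypothetical fixation log might look like this:

# Hypothetical fixation log: one row per fixation (invented format and numbers)
fixations = data.frame (
	participant = c ("expert", "expert", "expert", "novice", "novice", "novice", "novice", "novice"),
	duration.ms = c (620, 540, 710, 180, 220, 150, 240, 200)
	)

# "Fewer, longer fixations" vs "many short fixations" shows up immediately in a per-person summary
aggregate (duration.ms ~ participant, data = fixations, FUN = function (d) c (n = length (d), mean.ms = mean (d)))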

_MG_0689

The locality above is in Owens Valley, CA (36.60594N, 118.07511W) – a fault scarp that resulted from an earthquake in 1872. It has ca. 15 ft of right-lateral slip and 8 ft of vertical slip. As you can see from the image above, the expert realises that he/she is faced with a recent fault escarpment. The expert analyses the break in slope and the presence of the boulders on the escarpment. The novice, however, sees no significant feature in the dusty ground and looks to the mountains in the distance and the local hills to see if there is anything obvious. The novice’s path of fixations is chaotic and short, returning briefly to the same points for no reason, then heading elsewhere.

Untitled_Panorama4

The above scene is of a hanging valley located in Yosemite National Park (37.71769N, 119.64846W). The hanging valley is the valley of a tributary glacier, and now provides drainage for water. The cliff face is the side of the main valley carved by the main glacier – this main valley was deeper, and so the tributary valley now “hangs”. The expert fixations for this spot are therefore right on the money. The expert looks at the slight U-shape of the hanging valley, acknowledges that there is still drainage here (reinforcing that this is in some way a small valley), and also notices the steep valley sides – likely caused by glacial activity. The novice, however, is distracted by the pretty rainbow and waterfall, and fails to see any really significant features in what to them is just a cliff face.

Summary
I had a great time in the USA taking part in the study, and I learned some new field skills and reinforced some really valuable old ones:

  • Make sure you do your research on the geological context of the area – we weren’t allowed to do much reading at all, and it really makes it tough when you are just dumped in a completely new tectonic/geological setting with no warning
  • Keep your eyes open all the time – even when driving from spot to spot. You should be gaining information all the time. Not being able to look around as we drove between stops was disconcerting and contributed to the difficulty of interpretation.
  • Have a question to answer in your mind – this gives the brain a guide when prioritising fixations
  • Make sure you can see properly – sunglasses and hats get distracting direct sunlight out of your eyes
  • PUT NOTEPAD AND PENCIL AWAY – just look. Sit there and just look and think. Then when you have understood more, start to draw.
  • When you see a feature you think might be interesting, feed it into your starting question: This is a bush… Does the bush give me evidence of recent tectonic activity? No. Stop looking at bushes.
  • Relate your stops to other useful resources – maps and satellite images. In the last example, it would be enlightening to see the broad and shallow valley atop the cliff.
  • If you find yourself looking all over the place – STOP. Start looking for lines and colour changes – i.e. topography and lithology changes. What are the key features? List them if you need to, even if you don’t understand their relevance.

Remember that your eyes don’t have a brain of their own. They are guided by what you know. If you don’t know anything, your eyes are not much use to you!

Thanks so much for reading! Please feel free to comment below!

#22 – Workflow for dynamic graphics


Screen Shot 2013-08-19 at 15.51.50

OK, so I’m a massive advocate of using LaTeX to create text documents for one main reason – they’re dynamic. If you change something in the source, it’s automatically changed in the document. This is why, when I create R scripts for plots, I save both the scripts and their outputs in the LaTeX folder for the document. When I change the plot code, I don’t have to go and make a duplicate in the LaTeX folder.
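
As a minimal sketch of that workflow (the file paths here are just placeholders, not my actual project structure), the R script writes its output straight into the LaTeX document’s figures folder, so \includegraphics always picks up the latest version at the next compile:

library (ggplot2)

# A hypothetical plot script kept alongside the LaTeX source
p = ggplot (mtcars, aes (x = wt, y = mpg)) +
	geom_point ()

# Save directly into the LaTeX document's figures folder;
# \includegraphics{figures/mpg-vs-weight} then always shows the current version
ggsave ("~/Documents/thesis/figures/mpg-vs-weight.pdf", plot = p, width = 6, height = 4)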

Anyway, you can build the same workflow into your graphics design. If you’re making a poster or other graphic, simply “place” (link) images into your document rather than copying and pasting them. Every time you change the original, it’s updated in the graphic you’ve made. I’ve been doing this recently for my poster for the Goldschmidt conference, and it’s been a real time saver since my data has remained dynamic right up until the day of printing!

#21 – Find significant relationships in data with a CoCo Matrix


Screen Shot 2013-08-19 at 08.47.17

The CoCo Matrix (correlation coefficient matrix) is an R script that takes a table headed with multiple variables, calculates the correlation coefficients between each pair of variables, determines which are statistically significant, and represents them visually in a grid plot. I created the CoCo Matrix to cross-correlate a table with a large number of variables and quickly assess where important correlations could be found.

Screen Shot 2013-08-19 at 08.47.27

Using the CoCo Matrix

The R file can be downloaded here or copied from the textbox at the end of this post.

  1. If you know the number of samples in your dataset (n), then the degrees of freedom (df) = n-2. Use this table to find the R value above which significant correlations lie (or compute it directly, as in the sketch after this list). In the code, change the value of “p” at the top to the value you just looked up. If you don’t know the value of n, then run the code once and type “n” into the console.
  2. If you want, customise the colours in the customisation area of the code
  3. Run the code. A dialogue box will request a file. Alternatively, edit the code to point directly to the file you want to use.
  4. Voila!
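
If you would rather not use the lookup table, the critical R value can also be computed directly from the t-distribution. This is just a quick sketch, not part of the CoCo Matrix script itself, and the 0.05 significance level is only an example:

# Critical correlation coefficient for a two-tailed test at significance level alpha
critical.r = function (n, alpha = 0.05) {
	df = n - 2							# degrees of freedom
	t.crit = qt (1 - alpha / 2, df)		# critical t value
	t.crit / sqrt (df + t.crit^2)		# convert the critical t back to an R value
}

critical.r (30)		# e.g. n = 30 gives ~0.361, matching the table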

This is a very rough script I wrote, and I intend to make it a lot better at some point when I have the time. If you have any suggestions for improvements then please comment below or get in touch with me.

# CoCo Matrix version 1.0
# Written by Darren J. Wilkinson
# wilkinsondarren.wordpress.com
# d.j.wilkinson@ed.ac.uk
#
# The "CoCo Matrix" visualises the correlation coefficients for a given set of data.
# Like-Like correlations are given NA values (e.g. Height vs Height = NA). For the moment
# duplicates such as Height vs. Weight and Weight vs. Height remain. At some point I'll 
# provide an update that removes duplicates like that.
#
# Please feel free to edit the code, and if you make any improvements please let me know
# either on wilkinsondarren.wordpress.com or send me an email at d.j.wilkinson@ed.ac.uk

# Packages -------
library (cwhmisc)
library (ggplot2)
library (grid)
library (scales)
# ----------------

# Plot Customisation ----------------------------------------------------------
# (for good colour suggestions visit colourlovers.com)
col.significant = "#556270"			# Colour used for significant correlations
col.notsignificant = "lightgrey"		# Colour used for non-significant correlations
col.na = "white"						# Colour used for NA values
e1 = c("nb", "ta", "ba", "rb", "hf", "zr", "yb", "y", "th", "u")	# Variables (column headers in the input file) to cross-correlate
p = 0.5									# Critical R value for significance - change to the value for your degrees of freedom
# ------------------------------------------------------------------------------

# Read in the data (a dialogue box will request the file)
data = read.csv (file.choose ())

n = nrow (data)			# number of samples
n.e1 = length (e1)		# number of variables

# Containers for the correlation matrix and the long-format table used for plotting
results = matrix (NA, nrow = n.e1, ncol = n.e1)
plot.data = data.frame (x = rep (NA, n.e1^2), y = NA, coeff = NA, sig = NA)

# Loop over every pair of variables, calculate the correlation coefficient and
# flag whether it exceeds the critical value
r = 0
for (h in 1:n.e1) {
	for (i in 1:n.e1) {
		r = r + 1
		temp = cor (data[[e1[h]]], data[[e1[i]]], use = "complete.obs")
		if (temp > p) {s = "Significant"}
		if (temp < p) {s = "Not Significant"}
		if (temp == 1) {s = NA}
		if (temp == 1) {temp = NA}
		results[h,i] = temp
		plot.data[r,4] = s
		plot.data[r,3] = temp
		plot.data[r,2] = h
		plot.data[r,1] = i
	}

}

# Open new quartz window
dev.new (
	width = 12, 
	height = 9
	)

# Plot the matrix
ggplot (data = plot.data, aes (x = x, y = y)) + 

geom_point (aes (colour = sig), size = 20) + 

scale_x_continuous (labels = e1, name = "", breaks = c(1:n.e1)) +

scale_y_continuous (labels = e1, name = "", breaks = c(1:n.e1)) +

scale_colour_manual (values = c(col.notsignificant, col.significant), na.value = col.na) +

labs (title = "CoCo Matrix v1.0")+

theme (
	plot.title = element_text (vjust = 3, size = 20, colour = "black"), #plot title
	plot.margin = unit (c(3, 3, 3, 3), "lines"), #adjust the margins of the entire plot
	plot.background = element_rect (fill = "white", colour = "black"),
	panel.border = element_rect (colour = "black", fill = F, size = 1), #change the colour of the axes to black
	panel.grid.major = element_blank (), # remove major grid
	panel.grid.minor = element_blank (),  # remove minor grid
	panel.background = element_rect (fill = "white"), #makes the background transparent (white) NEEDED FOR INSIDE TICKS
	legend.background = element_rect (colour = "black", size = 0.5, fill = "white"),
	legend.justification = c(0, 0),
	#legend.position = c(0, 0), # put the legend INSIDE the plot area
	legend.key = element_blank (), # switch off the rectangle around symbols in the legend
	legend.box.just = "bottom",
	legend.box = "horizontal",
	legend.title = element_blank (), # switch off the legend title
	legend.text = element_text (size = 15, colour = "black"), #sets the attributes of the legend text#
	axis.title.x = element_text (vjust = -2, size = 20, colour = "black"), #change the axis title
	axis.title.y = element_text (vjust = -0.1, angle = 90, size = 20, colour = "black"), #change the axis title
	axis.text.x = element_text (size = 17, vjust = -0.25, colour = "black"), #change the axis label font attributes
	axis.text.y = element_text (size = 17, hjust = 1, colour = "black"), #change the axis label font attributes#
	axis.ticks = element_line (colour = "black", size = 0.5), #sets the thickness and colour of axis ticks
	axis.ticks.length = unit(-0.25 , "cm"), #setting a negative length plots inside, but background must be FALSE colour
	axis.ticks.margin = unit(0.5, "cm") # the margin between the ticks and the text
	)

# Print data tables in the console
results
plot.data

#19 Update: Should you trust your digital compass?


20130623-233955.jpg

In the wake of my last post on digital compass clinometer functions, I was left feeling rather dissatisfied that I hadn’t found out the whole truth about the accuracy of the digital compasses found in most modern smartphones and tablets. So I set aside half an hour over the weekend to gather some data and see for myself what was the case.

Here I present a very rough experiment using:
1 x iPhone (built-in compass app)
2 x Analogue compass clinometers (1 x Suunto MC-2, 1 x Silva type-15)

Data Gathering
Here we are testing the ability to measure magnetic north, so any readings must reflect that. The analogue compasses used are “working” compasses so have had the baseplate correction set to zero. The iPhone compass was also set to measure magnetic north.

To ensure meaningful data, both compasses must always be measuring the same direction of travel, and must not be so close as to magnetically interfere with each other. To achieve this I used a 130 mm wide piece of plastic with parallel sides (an empty DVD box) to separate the two devices whilst ensuring they pointed in the same direction. I also used the right-hand side of the iPhone, as this side has no buttons, and thus the straight side of the device should ensure the “top” of the phone is in the correct line.

I also needed to make sure that there was no significant external magnetic interference. Since it was pissing it down with rain outside, I opted to do it in the centre of a large room. There were no metal/electronic objects within a 2 m horizontal radius. Assuming there is no large neodymium magnet under my floorboards (?), I also assumed there would be no interference from above or below. I was not wearing a watch, necklace, bangle/bracelet, earrings or any other piercings, or a chastity belt, on my person that could interfere.

20130623-234452.jpg

On a side note, I am very confident that compass 1 (and, by association, compass 2) is indeed accurate. This is because only a matter of weeks ago I used the compass on fieldwork and on a subsequent hiking holiday, where it was used countless times for triangulation. Most of the time I was showing students how to triangulate, locating ourselves at a known location on a map. These positions were also verified by a high-accuracy mobile GPS. In short, if my compass were inaccurate or damaged, I would be well aware of the fact. Interestingly, whilst on that fieldwork a colleague of mine discovered she had fallen victim to the phenomenon whereby the magnetic polarity of your compass becomes reversed due to prolonged proximity to a smartphone.

Data was gathered in two job-lots. Using first compass 1 (C1, Suunto) and then compass 2 (C2, Silva), I sequentially adjusted the baseplate in increments of 10 degrees from 0 to 180. I had the digital and analogue compasses on my lap, separated by the plastic rectangle. Sat on my swivel chair, I then shwiffled (shuffled and swivelled) round until the magnetic needle of my compass overlay the north arrow on the baseplate as well as I could get it. My lap provided much-needed stability. I then recorded the reading on the iPhone as a counterpart measurement.

The Results
I must say that I was alarmed at the apparent error in the iPhone. The mean difference in readings was 4.8 degrees (s.d. 9.2) and 3.2 degrees (s.d. 6.8) for compass 1 and compass 2 respectively. Interestingly, however, there seems to be a pattern in the data. Both compasses show a shift from positive to negative/less positive error right around 90-110 degrees. While this pattern is more symmetrical for compass 1, both seem to have a similar negative excursion over approximately 70-80 degrees.

20130623-234818.jpg
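
If you want to repeat this and summarise your own readings, the arithmetic is straightforward. Here is a rough R sketch (the bearings below are invented, not my data); the wrapping step makes sure that, say, 359° vs 1° counts as a 2° difference rather than 358°:

# Hypothetical paired bearings in degrees: analogue compass vs iPhone
analogue = c (0, 10, 20, 30, 350)
digital  = c (4, 12, 27, 38, 355)

# Signed difference wrapped into the range -180 to +180 degrees
err = ((digital - analogue + 180) %% 360) - 180

mean (err)	# mean error
sd (err)	# spread of the error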

Theoretically, any static disturbance in the room should cause a consistent error in the measurement, as neither the needle, the point in the room where the measurements were made, nor the site of any potential disturbance actually moves. Even if there were something in the room causing interference, one would expect the analogue compass to have been affected too, and thus there would be a homogeneous error in the two measurements.

So what do these errors indicate? Is this error acceptable? Of course the answer is no. Sure, the internal magnetometer in the iPhone, or any other device using a similar piece of hardware, is probably good enough to give you a bearing on a city street, but any remotely consequential use of that information would surely lead to undesirable results. Yet although the vast majority of people out on mountainsides around the world are unlikely to be the sorts of individuals who would use an iPhone for navigation, my concern surrounds the use of “smart” devices as geological tools. Indeed, in my last post I was very positive about the app GeoID. That’s not to say that the app itself is not useful. But with the prospect that the app’s data-gathering capabilities are hampered by hardware accuracy, it seems GeoID would probably be more useful and reliable if you were to collect data with an analogue compass clinometer and then input it into the program. What a laborious and time-consuming task that would be – a task that negates the entire convenience and quirkiness of the app. If you were going to sit there for a couple of hours and input that data into something, why wouldn’t you rather input it into a piece of software such as those discussed in this blog post on structuralgeology.org?

I encourage you to gather some data yourself and let me know what you find. I’m a big advocate of using technology wherever possible, especially in geology, so I’m really interested in knowing more about the limitations of such technology.

# 18 – Quick Review of GeoID and Digital Compass Clinometers


20130613-225359.jpg

GeoID is a smart-device app (available on iTunes, Google Play, and CNET) which is a powerful tool for the field geologist, from student to experienced professional. It combines a variety of useful functions, from a compass clinometer to the plotting and analysis of 3D planes and poles. In this post I’m going to review some key features of the app, having field-tested it over two field trips in the North West Highlands of Scotland.

Background on the App

20130613-212449.jpg

GeoID was developed by Jin Son, an independent developer and Energy Systems Engineering student based at Seoul National University. I’m not sure how long ago he developed the app, but it has to have been around more than 12 months. It has not been rated at all on iTunes, and has had only a few ratings on Google Play (4/5 stars).

Compass Clinometer Feature
There are already a range of apps out there that can effectively replace your traditional compass clinometer (e.g. GeoCompass, Strike and Dip), but none I have seen have the utility and design to match GeoID.

But the question to ask here is not which app looks better, but rather whether a compass clinometer on your phone/tablet is more accurate than a traditional compass-clino. Well, the accuracy, and indeed precision, of the digital compass and digital clinometer is the product of two things: the hardware and the coding. The compass part of the software uses both the in-built magnetometer AND the accelerometer. The magnetometer measures a heading from the Earth’s magnetic field, and this is combined with data from the accelerometer on the orientation of the phone to give you a direction of travel on your screen. Most compass applications on smart devices provide up-to-date corrections for magnetic vs. true north, so this shouldn’t be an issue with well-coded apps. The general, better-informed consensus in many online forums is that the difference in accuracy between newer digital compasses and traditional needle compasses is negligible. This is based on direct comparisons between digital and analogue compasses reported online, and conducted by myself.

The digital clinometer seems to be more precise than the analogue clinometers found in most compass-clinos. In a blog post I found, the iPhone’s clinometer was compared, by a carpenter, with a tilt box (a common, high-precision clinometer used in many professional workshops). He wrote:

“Tried Clinometer last night and compared it to my tilt box, its most likely as accurate as the tiltbox – I used it as a reference to find out if two parts of a railing I am building were co-planar, the difference between the two planes was within 0.1 degrees between the tiltbox and the iphone app. On an absolute scale (i.e. I didn’t zero the reading on the first plane before moving it to the second plane), the two readings were within 0.1 degrees of each other, that’s probably as accurate as it gets in my shop.”

So it seems that the compass clinometer on a smartphone may well be a healthy rival to the conventional analogue device, its power multiplied by the ease and speed with which it can take readings (which is not always a benefit, as I’ll discuss later).

GeoID further increases that power through clever coding. Dip/strike or dip/dip-direction readings can be averaged over a given time interval to make them more stable and precise, so when recording a surface there is a small delay before a “stable” reading can be taken. If you record a reading before it is stable, you are warned with a sound.

20130613-222054.jpg

Data recording & real time plotting
So, the compass clinometer is a quick, accurate and powerful tool. On top of that, in GeoID the compass clinometer button allows you to quickly gather and record lots of structural data on poles and planes, all whilst seeing it plotted in real time onto a stereonet of your choice. Plus, if you have GPS in your device, each data point is geotagged!

20130613-225359.jpg

I tested the app out on a thrust-related fold in Durness limestone on the shore of Loch Assynt, North West Highlands of Scotland. In under an hour I was able to collect 300 dip and dip-direction measurements. Although this made a really nice set of data, I noticed that the ability to gather such a large volume of data must be used with care. The limestone here is heavily karstified, so measuring a large number of surfaces resulted in quite a large scatter on each of the fold limbs. Arguably, had GeoID an in-built averaging function this would have been less of a problem, but alas it does not.

Once the data is collected, you can view the stereonet full-screen and change from equal-area to equal-angle projections. One thing that would be really useful here would be the ability to have layers of data, so that planes could be fitted to each of the limbs to give a mean plane per limb, from which you could read off things such as the mean plunge and plunge direction.
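
In the meantime you can do that kind of averaging yourself once the data has been exported. Here is a rough sketch in R (my own, not a GeoID feature, and the readings are invented) that converts each plane to its pole, averages the pole vectors, and converts the mean pole back into a plane:

# Hypothetical dip-direction/dip readings (degrees) from one fold limb
dipdir = c (118, 122, 120, 125, 119)
dip    = c (34, 31, 36, 33, 35)

deg2rad = pi / 180

# Pole to each plane: trend = dip direction + 180, plunge = 90 - dip
trend  = ((dipdir + 180) %% 360) * deg2rad
plunge = (90 - dip) * deg2rad

# Direction cosines (north, east, down) of each pole, then the mean vector
n.comp = cos (plunge) * cos (trend)
e.comp = cos (plunge) * sin (trend)
d.comp = sin (plunge)
v = c (mean (n.comp), mean (e.comp), mean (d.comp))
v = v / sqrt (sum (v^2))	# normalise the mean pole

# Convert the mean pole back to a mean plane
mean.plunge = asin (v[3]) / deg2rad
mean.trend  = (atan2 (v[2], v[1]) / deg2rad) %% 360
c (dip.direction = (mean.trend + 180) %% 360, dip = 90 - mean.plunge)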

I had also given my iPhone to another geologist to gather some extra data. Although smaller, it could be placed on a clipboard to average the bedding-plane surfaces. The app allows you to then share the data via email or Bluetooth, or export it to a file; however, there is no ability to collaborate on the same project, so I was unable to amalgamate the two datasets.

I really like the GeoID app in its simplest form, as a tool for gathering dip/dip-direction data quickly and accurately. I think that it has a lot of potential on the analytical side of things, although I appreciate the complexity of implementing such features in the program. What I think this app needs is more users, more raters, and for us to give some constructive feedback to Jin Son on how he can make this app even more awesome!

Synopsis
Good
– Compass clinometer is incredibly useful and accurate
– Ability to quickly gather and store pole/plane structural data
– Real-time plotting of data allows on-the-fly interpretation
– Very user friendly
– Easily send/export data

Bad
– Limited to one layer of data
– No structural analysis relevant to folding etc
– Cannot collaborate on, or amalgamate projects

#13 Mapping in R: Representing geospatial data together with ggplot


homies1

I have been trawling around for a while now trying to find a simple and understandable way of representing geospatial data in R, whilst retaining the ability to manipulate the visualisation in ggplot. After much searching I came across some articles which got me to a working product, but only after a lot of ball ache. All the coding is done in R, so if you don’t know what that is, click here. I keep the code simple, mainly because I don’t need it to be more complex for my purposes, but it also helps newbies like me learn the syntax faster.

ggmap is a package developed by David Kahle and Hadley Wickham (Hadley being the guy behind ggplot2). If you want more detail, see David’s slides from the 8th International R User Conference.

 
1.0 Fetching a Map

Maps may be brought into R from a number of sources; the two main ones are Google Maps and OpenStreetMap. The code needed to fetch the map is slightly different depending on where you want the data from. Below are some examples:

 
library (ggmap) 

ggmap(
	get_googlemap(
		center=c(-3.17486, 55.92284), #Long/lat of centre, or "Edinburgh"
		zoom=14, 
		maptype='satellite', #also hybrid/terrain/roadmap
		scale = 2), #resolution scaling, 1 (low) or 2 (high)
		size = c(600, 600), #size of the image to grab
		extent='device', #can also be "normal" etc
		darken = 0) #you can dim the map when plotting on top

ggsave ("/Users/s0679701/Desktop/map.png", dpi = 200) #this saves the output to a file

This outputs the following files:

maptype = "satellite"

maptype = "roadmap"

maptype = "terrain"

We can also obtain a map from OpenStreetMap:

library (ggmap) 

ggmap(
	get_openstreetmap (
		bbox = c(-3.18473, 55.91899, -3.16518, 55.92716), #left, bottom, right, top
		format = "png"
		)
	)

ggsave ("/Users/s0679701/Desktop/map.png", dpi = 200) #this saves the output to a file

You may receive the following error:

 Error: map grabbing failed - see details in ?get_openstreetmap.
In addition: Warning message:
In download.file(url, destfile = destfile, quiet = !messaging, mode = "wb") :
  cannot open: HTTP status was '503 Service Unavailable'

This is because the OpenStreetMap server sometimes has issues, and so you just need to be lucky! Hence why there is no OpenStreetMap example here… yet.
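
If you don’t fancy re-running the script by hand until the server responds, one option is to wrap the call in a simple retry loop. This is just a sketch assuming the same bounding box as above – tweak the number of attempts and the pause to taste:

library (ggmap)

osm = NULL
for (attempt in 1:5) {
	osm = tryCatch (
		get_openstreetmap (
			bbox = c(-3.18473, 55.91899, -3.16518, 55.92716),
			format = "png"
			),
		error = function (e) NULL	# swallow the "503 Service Unavailable" error and try again
		)
	if (!is.null (osm)) break
	Sys.sleep (10)	# wait 10 seconds before the next attempt
}

if (!is.null (osm)) ggmap (osm)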

 
2.0 Plotting on a Map

You can plot any [x, y, +/- z] information you’d like on top of a ggmap, so long as x and y correspond to longitudes and latitudes within the bounds of the map you have fetched. To plot on top of the map you must first assign your map to a variable and then add a geom layer to it. Here is an example:

library (ggmap) 

#Generate some data
long = c(-3.17904, -3.17765, -3.17486, -3.17183)
lat = c(55.92432, 55.92353, 55.92284, 55.92174)
who = c("Darren", "Rachel", "Johannes", "Romesh")
data = data.frame (long, lat, who)

map = ggmap(
	get_googlemap(
		center=c(-3.17486, 55.92284), 
		zoom=16, 
		maptype='hybrid', 
		scale = 2), 

		size = c(600, 600),
		extent='normal', 
		darken = 0)

map + geom_point (
		data = data,
		aes (
			x = long, 
			y = lat, 
			fill = factor (who)
			), 
		pch = 21, 
		colour = "white", 
		size = 6
		) +

	scale_fill_brewer (palette = "Set1", name = "Homies") +

	#for more info on these type ?theme()	
	theme ( 
		legend.position = c(0.05, 0.05), # put the legend INSIDE the plot area
		legend.justification = c(0, 0),
		legend.background = element_rect(colour = F, fill = "white"),
		legend.key = element_rect (fill = F, colour = F),
		panel.grid.major = element_blank (), # remove major grid
		panel.grid.minor = element_blank (),  # remove minor grid
		axis.text = element_blank (), 
		axis.title = element_blank (),
		axis.ticks = element_blank ()
		) 

ggsave ("/Users/s0679701/Desktop/map.png", dpi = 200)

homies1homies2

This simple code should be enough to get you going making your own plots. If you have any questions about this code or your own, then please don’t hesitate to get in touch via the comments below.

Happy Mapping!

#9 Thin Section Photomicrographs Using a Mobile Phone: Tips and Tricks


Like many of my colleagues, I have a lot of thin sections of many different rocks to look at. But sharing what you observe down the microscope is not as easy and convenient a process as it could, or indeed should, be. When looking at sections, I make notes etc. directly on my computer, and therefore I wanted the convenience of sitting at my computer rather than at a station in the microscope laboratory, which is in a different part of the building.

Recently, the quality of the lenses and sensors in mobile phones has shot through the roof, and most modern smartphones now come with a pretty decent camera. Not to mention that on the same device you have access to many apps, including e-mail, via which you can share and send your images to other devices. Much easier than removing a memory card and inserting it into your laptop!

How To

Cover the other eyepiece to prevent entry of light from your surroundings. Then use both hands to offer the camera up to the eyepiece whilst sealing out as much light as you can.

1. The logistics of taking the photo with your camera.

Step 1
Locate the field of view you wish to photograph. Make sure that the image is in focus. You can of course take the image through either eyepiece, but I prefer using the one without the crosshairs. Make sure this individual eyepiece is in focus.

Step 2
Cover the other eyepiece to stop reflections. Light doesn’t only leave an eyepiece, it can also enter it. Light entering the open eyepiece will bounce around the optics and inevitably exit through the one that you want to take an image through causing nasty artefacts.

Step 3
If you are right-handed, hold the phone in your right hand in portrait mode. It’s easier to handle this way, and the “take photo” button is probably better accessed by your thumb/finger this way too. Using your left hand, shield the top and sides of the phone. This also allows you to stabilise and manoeuvre the phone into the desired position.

Step 4
Repositioning the camera takes a bit of practice, but you’ll get the hang of it in no time. First you want the lens roughly in front of, and perpendicular to, the image coming out of the eyepiece. You should see some sort of image appear on your screen; it will more than likely be overexposed and/or blurred. Once this image is central, using your left hand as a steady guide, adjust the spacing between your phone and the eyepiece. You are not focusing the image here; you are ensuring that the entire field of view is projected onto the camera lens equally and entirely. Too far away or too close and it will look like a small spotlight is illuminating part of the field of view while the rest of it is in darkness. As you move closer to the ideal distance, that spotlight will become larger until – “BAM!” – suddenly the whole field of view is illuminated, and you can take your picture once the camera has focused. This ideal distance is quite a narrow zone, so steady your hands and make gentle adjustments until the image is right.

The image shows what you expect to see at various distances from the eyepiece. You want to aim for full illumination which is in the middle of the range.

2. Some Common Issues
Dark crescents around the edge of the image – You are either too far away from or too close to the eyepiece.
Dark spots or areas inside an otherwise well-illuminated field of view – Your phone is more than likely not perpendicular to the eyepiece, so adjust the attitude of the phone relative to the eyepiece.
Colour is off (too warm) – It’s likely the phone is reducing the exposure too much, so try reducing the diaphragm (or the power of the light going into the microscope). Reducing the diaphragm will also increase the relief of the mineral boundaries, making them more defined.
Camera won’t focus – A limitation of your camera phone and/or its camera software.

Reducing the diaphragm on the microscope gives better colour reproduction for some camera phones. It also increases the apparent relief of the minerals.

3. Things to do to increase quality
– Particularly when shooting in plane-polarised or plain unpolarised light, reducing the diaphragm aperture on the microscope will increase the image quality and colour reproduction.
– Make sure that you get a good light seal on both eyepieces.
– Make sure pin-sharp focus is achieved.
– Better microscopes = better images, better camera phones = better images.

4. Post-processing
If you want to really get the best quality image you can from your phone, I recommend that you take the following post-processing steps. I personally use Photoshop, but most image-editing packages will achieve the same results.

Step 1 – Adjust the levels. Using the histogram, move the left (blacks) slider either A) to sit under the peak of the blacks, or B) to exclude the black peak entirely. Use whichever gives the best result for that image. Then move the right-hand (whites) slider to the left to achieve the desired brightness. You can also adjust the middle (midtones) slider to suit.

The levels for the image before they were adjusted.

Levels for the image after adjustment.

Comparison of the image before and after the levels were adjusted as described.

Step 2 – Apply Sharpening

This may or may not be necessary depending on your image – it’s always worth trying to see. In Photoshop, go to Filter > Sharpen > Unsharp Mask. I applied a strength of 46% and a radius of 2.1 pixels; this will vary for each image. As a general rule I like to keep the strength slider higher than the radius slider to avoid black halos.

The unsharp mask in Photoshop is a very powerful tool.

It may not seem like your image needs sharpening, but after an unsharp mask is applied the results are stunning.

Step 3 (Optional)

If the colour balance is not completely right you can edit the colour balance manually. Go to Image > Adjustments > Colour Balance (or Cmd + B). This image, however, does not require it.

Final comparison of images. The only thing that has been done is a levels adjustment and sharpening applied.

5. Interconnectivity

One great way of taking photomicrographs and keeping them together with any observations you may have is to use Evernote. Evernote is a great note-taking application for your PC, Mac, Unix machine, mobile phone, iPad or tablet. Everything syncs wirelessly over the internet. I can write some notes on my computer about the slide I am looking at, and when I spot something interesting I can open up the Evernote app on my phone and take a photo which gets inserted into my notes. A few seconds later the photo automatically appears in the note on my computer screen without me needing to tell it to do anything! Truly a great tool for the geologist!

Evernote is a very useful tool for anyone who takes notes with any kind of media and/or the need to access the notes on a number of different devices.