Eye Tracking Geologists in the Field



Abstract: The history of eye-movement research extends back at least to 1794, when Charles Darwin's grandfather, Erasmus Darwin, published "Zoonomia", which included descriptions of eye movements due to self-motion. For the next 200 years, eye-tracking research was confined to the laboratory. That all changed when Michael Land built the first wearable eye tracker at the University of Sussex and published a seminal paper entitled "Where we look when we steer". Inspired by Land's work, a group of cognitive scientists, computer scientists, computer engineers and geologists has been working to extend our knowledge of how we actually use vision in the real world. I was fortunate enough to participate in this ground-breaking experiment earlier this year, and I wanted to share the experience with the geology community! In this blog article I will give a brief summary of the project I was involved in and the things I learned that can really help you be a better field geologist!


How do we look at a scene?
Most animals do not take in a scene all at once – we simply don't have the resources to process every single thing in front of us. Instead, our brains have developed a cunning cheat whereby we target important aspects of a field of view and build up an image in our mind from those selected areas. The bandwidth and processing power of our eye-brain circuitry therefore only need to deal with a small portion of the image at once, and a lot of interpolation can be done at source in the brain. This is illustrated in the figure below:

Fig.1 Unlike birds, most animals build an image of the field of view before them by targeting important areas sequentially.


Figure 1a shows what a field of view may "look" like to our brain before any fixations are made. Using our limited understanding of this new scene, the brain then targets key areas of the field of view for further investigation (fig. 1b-c). Fixations of varying duration and order are made, allowing the brain to develop, in real time, a better understanding of what is before the eyes. This process continues for as long as we look at a scene.

This process has been shown to be heavily influenced by the individual. The specific way we go about deconstructing a scene depends on a whole host of factors: gender, intelligence, familiarity with the surroundings, experience with the scene, state of mind (distraction, tiredness) – basically, any environmental factor that can affect the brain may change the way our brain and eyes investigate a scene. We can therefore use eye tracking to see how different categories of people look at the same scene. To do this, eye-tracking software can gather the following information:

  • Fixation Points
    • Location, duration, sequence, saccade types
  • Pupil Dilation/constriction
  • Blinks
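From raw gaze samples, fixation points are typically extracted with a dispersion-threshold algorithm (I-DT): consecutive samples that stay within a small spatial window for a minimum duration are grouped into one fixation, and everything in between is treated as saccade. Here is a minimal sketch in Python – the thresholds and the algorithm choice are illustrative, not taken from this study:

```python
def dispersion(window):
    """Spread of a set of gaze samples: (x range) + (y range)."""
    xs, ys = zip(*window)
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(samples, max_dispersion=1.0, min_samples=3):
    """Group consecutive (x, y) gaze samples into fixations using a
    dispersion threshold (I-DT). Returns (mean_x, mean_y, n_samples)
    per fixation."""
    fixations = []
    i = 0
    while i <= len(samples) - min_samples:
        j = i + min_samples
        if dispersion(samples[i:j]) <= max_dispersion:
            # grow the window while the samples stay clustered
            while j < len(samples) and dispersion(samples[i:j + 1]) <= max_dispersion:
                j += 1
            xs, ys = zip(*samples[i:j])
            fixations.append((sum(xs) / len(xs), sum(ys) / len(ys), j - i))
            i = j  # continue after this fixation
        else:
            i += 1  # a saccade sample; slide the window on
    return fixations

# Two tight clusters of gaze points separated by a saccade-like jump
gaze = [(0.0, 0.0), (0.1, 0.1), (0.0, 0.2),
        (5.0, 5.0), (5.1, 5.2), (5.2, 5.1), (5.0, 5.1)]
print(detect_fixations(gaze))
```

Fixation duration then falls out of the sample count and the sampling rate, and saccade amplitudes from the jumps between successive fixation centres.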

Eye tracking technology is therefore a powerful tool in cognitive research that may be used to access the brain. Here are some examples of common applications of eye-tracking:

  • Commercial Applications
    • User interface design
    • Marketing and product placement
    • Targeted marketing
  • Primate/infant/adult/geriatric research
  • Safety
    • Fatigue detection
    • Concentration detection
  • Sports Training
    • Motorsports
    • Ball sports
  • Accessibility
    • Communication tools for disabled people
    • Advanced methods for computer-human interaction
  • Medical
    • Laser Eye Surgery
    • fMRI, MEG, EEG

Eye tracking is of particular interest to the commercial sector. Interestingly, commercial applications exploit the way the brain and the eyes work together: for instance, there's a proven reason that advertisements at the top of the Google search results page cost the most…

Heatmap of fixations on a Google search results page. (http://highongoogle.net/images/seo-bolton-google-eyetracking.jpg)

So what does this have to do with geology? Well, geologists are just one of many communities targeted by this new research. As mentioned in the abstract, eye-tracking technology has only recently become mobile enough to be taken out of the laboratory and into the real world. Geologists are often confronted with new scenes in the field and must use their eyes and brains to really understand what it is that they are looking at. Therefore, by taking amateur and professional geologists into the field and conducting eye-tracking experiments, we can gain insights into the different ways professionals and novices approach visual problems.

The Study of Geologists in the Field
A joint study between the University of Rochester and the Rochester Institute of Technology (RIT) has for the past five years been using a hefty NSF grant ($2M) to research how geologists work in the field. Principal investigators include Robert Jacobs, Jeff Pelz, and John Tarduno. The beautiful wearable eye trackers were developed by Jason Babcock. The research has been conducted in a variety of environments, but the part that I took part in was a 10-day field excursion to the western USA to visit some truly amazing geological localities.

First, let's take a look at the technology that I was wearing for the 10 days, and how it worked:

Backpack – Contains an Apple MacBook Air, the powerhouse of the mobile eye-tracking unit and home to the custom-built processing software.


Head Unit – Contains a front-facing camera above the right eye, and an eye-facing camera and IR bulb for filming eye movements. We also had to wear a ridiculously large sombrero to shield the cameras from direct sunlight.


Network – All the devices were connected to a local network, the router for which was carried around by Jeff Pelz. The custom software meant that he could use his iPhone to get real-time footage from any participant's front-facing or eye-facing camera at any time. This was useful for adjustments, and for keeping participants looking where they should during experiments.


The Vans – The network extended back to the vans too. This van was the tech van and contained massive hard drives for backing up all the data. Each evening, the RIT folk would sit in this van and process and back up the data.


Calibration – Each time we wore the mobile eye-tracking units, we had to complete a series of calibration exercises so that our fixations could be mapped onto the video from our front-facing cameras. This was done by standing a few metres away from a calibration spot (as seen below on the back of Tomaso's notebook) and rolling the head whilst keeping our eyes fixated on the spot. As if we didn't look stupid enough!


Gigapan Images – Meanwhile, other members of the tech team were taking panoramic GigaPan images of each scene onto which the tracking data could be overlaid.


A typical stop would work something like this (somewhat similar to a roadside execution…):

  1. On approach to the locality we were told via radio not to look at the surroundings
  2. We’d get out of the vans, get the trackers on and calibrate
  3. Someone would then lead us to the viewing point – all the while we had to look down at our feet
  4. At the viewing point we would be given a question to address in our minds – often something like “What is the evidence this is a tectonically active area?”
  5. We were then told to look up and analyse the scene in silence for around a minute
  6. After the allocated time was up we then had to answer questions about what we had observed
  7. Following the questions, we would be given an explanatory guide to the geology by John Tarduno

The Results
Unfortunately, I wasn't allowed to know the conclusions of the study during the trip – this would have ruined the experiment. Similarly, if you think you might ever have the chance to take part in a similar study – STOP READING NOW. Despite the lack of information I was privy to, I did manage to get some of the conclusions of the study from the authors before I left. What I am allowed to divulge is pretty intuitive, but may help you, and even your students, learn to analyse scenes better.

Put simply, professional geologists make fewer, longer and more systematic fixations when looking at scenes of interest. This makes sense – your brain targets the parts of a scene that carry the most useful information for the problem in mind, i.e. what's the geological history of this outcrop? Conversely, the students in this experiment made lots and lots of short-lived fixations all over the scene, in seemingly random places, as they tried to search for something they might understand. Here are some visualisations of the differences between students and experts.

_MG_0689

The locality above is in Owens Valley, CA (36.60594N, 118.07511W) – a fault scarp that resulted from an earthquake in 1872. It has ca. 15 ft of right-lateral slip and 8 ft of vertical slip. As you can see from the image above, the expert realises that he/she is faced with a recent fault scarp, and analyses the break in slope and the presence of the boulders on the scarp. The novice, however, sees no significant feature in the dusty ground and looks to the mountains in the distance and the local hills to see if there is anything obvious. The novice's path of fixations is chaotic and short, returning briefly to the same points for no reason, then heading elsewhere.

Untitled_Panorama4

The above scene is of a hanging valley located in Yosemite National Park (37.71769N, 119.64846W). The hanging valley is the valley of a tributary glacier, and now drains water over the cliff. The cliff face represents the side of the main valley, carved by the main glacier – this main valley was deeper, so the tributary valley now "hangs". The expert fixations for this spot are therefore right on the money. The expert looks at the slight U-shape of the hanging valley, acknowledges that there is still drainage here (reinforcing that this is in some way a small valley), and also notices the steep valley sides – likely caused by glacial activity. The novice, however, is distracted by the pretty rainbow and waterfall, and fails to see any really significant features in what to them is just a cliff face.

Summary
I had a great time in the USA taking part in the study, and I learned some new, and reinforced some old, really valuable field skills:

  • Make sure you do your research on the geological context of the area – we weren’t allowed to do much reading at all, and it really makes it tough when you are just dumped in a completely new tectonic/geological setting with no warning
  • Keep your eyes open all the time – even when driving from spot to spot. You should be gaining information constantly. Not being able to look around as we drove between stops was disconcerting and contributed to the difficulty of interpretation.
  • Have a question to answer in your mind – this gives the brain a guide when prioritising fixations
  • Make sure you can see properly – sunglasses and hats keep distracting direct sunlight out of your eyes
  • PUT NOTEPAD AND PENCIL AWAY – just look. Sit there and just look and think. Then when you have understood more, start to draw.
  • When you see a feature you think might be interesting, feed it into your starting question: This is a bush… Does the bush give me evidence of recent tectonic activity? No. Stop looking at bushes.
  • Relate your stops to other useful resources – maps and satellite images. In the last example, it would be enlightening to see the broad and shallow valley atop the cliff.
  • If you find yourself looking all over the place – STOP. Start looking for lines and colour changes – i.e. topography and lithology changes. What are the key features? List them if you need to, even if you don’t understand their relevance.

Remember that your eyes don’t have a brain of their own. They are guided by what you know. If you don’t know anything, your eyes are not much use to you!

Thanks so much for reading! Please feel free to comment below!

#22 – Workflow for dynamic graphics



OK, so I'm a massive advocate of using LaTeX to create text documents for one main reason – they're dynamic. If you change something in the source, it's automatically changed in the document. This is why, when I create R scripts for plots, I save the scripts and their outputs in the LaTeX folder for the document. When I change the plot code, I don't have to go and make a duplicate in the LaTeX folder.

Anyway, you can build the same workflow into your graphic design. If you're making a poster or other graphic, simply "place" images into your document rather than copying and pasting them. Every time you change the original, it's updated in the graphic you've made. I've been doing this recently for my poster for the Goldschmidt conference, and it's been a real time saver, since my data has remained dynamic right up until the day of printing!
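In LaTeX itself, this linking is what the graphicx package does natively: the figure is referenced on disk rather than embedded, so each compile picks up the latest version. A minimal sketch (the folder and file names are just examples):

```latex
\documentclass{article}
\usepackage{graphicx}
\begin{document}
\begin{figure}
  \centering
  % plot.pdf is overwritten by the R script each time it runs;
  % the next compile automatically includes the new version
  \includegraphics[width=0.8\textwidth]{figures/plot.pdf}
  \caption{Always the current version of the plot.}
\end{figure}
\end{document}
```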

#21 – Find significant relationships in data with a CoCo Matrix



The CoCo Matrix (correlation coefficient matrix) is a script for R that takes a table headed with multiple variables, calculates the correlation coefficients between each pair of variables, determines which are statistically significant, and represents them visually in a grid plot. I created the CoCo Matrix to cross-correlate a table with a large number of variables, to quickly assess where important correlations could be found.


Using the CoCo Matrix

The R file can be downloaded here or copied from the textbox at the end of this post.

  1. If you know the number of samples in your dataset (n), then the degrees of freedom (df) = n-2. Use this table to find the R value above which significant values lie. In the code, change the value of "p" at the top as per the value you just looked up. If you don't know the value of n, run the code once and type "n" into the console.
  2. If you want, customise the colours in the customisation area of the code
  3. Run the code. A dialogue box will request a file. Alternatively, edit the code to point directly to the file you want to use.
  4. Voila!
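Incidentally, the table lookup in step 1 can be computed directly: the critical correlation coefficient follows from the critical t value via r = t / sqrt(t² + df). A quick sketch (the t value of 2.101 is the standard two-tailed 5% critical value for df = 18, taken from t tables):

```python
import math

def critical_r(t, df):
    """Critical correlation coefficient from the critical t value:
    correlations with |r| > t / sqrt(t^2 + df) are significant."""
    return t / math.sqrt(t * t + df)

# n = 20 samples -> df = 18; two-tailed 5% critical t for df = 18 is 2.101
print(round(critical_r(2.101, 18), 3))  # -> 0.444
```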

This is a very rough script I wrote, and I intend to make it a lot better at some point when I have the time. If you have any suggestions for improvements then please comment below or get in touch with me.

# CoCo Matrix version 1.0
# Written by Darren J. Wilkinson
# wilkinsondarren.wordpress.com
# d.j.wilkinson@ed.ac.uk
#
# The "CoCo Matrix" visualises the correlation coefficients for a given set of data.
# Like-Like correlations are given NA values (e.g. Height vs Height = NA). For the moment
# duplicates such as Height vs. Weight and Weight vs. Height remain. At some point I'll 
# provide an update that removes duplicates like that.
#
# Please feel free to edit the code, and if you make any improvements please let me know
# either on wilkinsondarren.wordpress.com or send me an email at d.j.wilkinson@ed.ac.uk

# Packages -------
library (cwhmisc)
library (ggplot2)
library (grid)
library (scales)
# ----------------

# Plot Customisation ----------------------------------------------------------
# (for good colour suggestions visit colourlovers.com)
col.significant = "#556270"			# Colour used for significant correlations
col.notsignificant = "lightgrey"		# Colour used for non-significant correlations
col.na = "white"						# Colour used for NA values
e1 = c("nb", "ta", "ba", "rb", "hf", "zr", "yb", "y", "th", "u")	# Variables (column headers) to cross-correlate
# ------------------------------------------------------------------------------

# Critical R value above which correlations are significant
# (look this up for your degrees of freedom, df = n - 2)
p = 0.632

# Load the data (a dialogue box will request a file)
data = read.csv (file.choose ())
n = nrow (data)
n.e1 = length (e1)

# Results matrix and long-format data frame for plotting
results = matrix (NA, nrow = n.e1, ncol = n.e1)
plot.data = data.frame (x = rep (NA, n.e1^2), y = NA, value = NA, sig = NA)

r = 0
for (h in 1:n.e1) {
	for (i in 1:n.e1) {
		r = r + 1
		temp = abs (cor (data[[e1[h]]], data[[e1[i]]]))
		if (temp > p) {s = "Significant"}
		if (temp < p) {s = "Not Significant"}
		if (temp == 1) {s = NA}
		if (temp == 1) {temp = NA}
		results[h,i] = temp
		plot.data[r,4] = s
		plot.data[r,3] = temp
		plot.data[r,2] = h
		plot.data[r,1] = i
	}

}

# Open new quartz window
dev.new (
	width = 12, 
	height = 9
	)

# Plot the matrix
ggplot (data = plot.data, aes (x = x, y = y)) + 

geom_point (aes (colour = sig), size = 20) + 

scale_x_continuous (labels = e1, name = "", breaks = c(1:n.e1)) +

scale_y_continuous (labels = e1, name = "", breaks = c(1:n.e1)) +

scale_colour_manual (values = c(col.notsignificant, col.significant, col.na)) +

labs (title = "CoCo Matrix v1.0")+

theme (
	plot.title = element_text (vjust = 3, size = 20, colour = "black"), #plot title
	plot.margin = unit (c(3, 3, 3, 3), "lines"), #adjust the margins of the entire plot
	plot.background = element_rect (fill = "white", colour = "black"),
	panel.border = element_rect (colour = "black", fill = F, size = 1), #change the colour of the axes to black
	panel.grid.major = element_blank (), # remove major grid
	panel.grid.minor = element_blank (),  # remove minor grid
	panel.background = element_rect (fill = "white"), #makes the background transparent (white) NEEDED FOR INSIDE TICKS
	legend.background = element_rect (colour = "black", size = 0.5, fill = "white"),
	legend.justification = c(0, 0),
	#legend.position = c(0, 0), # put the legend INSIDE the plot area
	legend.key = element_blank (), # switch off the rectangle around symbols in the legend
	legend.box.just = "bottom",
	legend.box = "horizontal",
	legend.title = element_blank (), # switch off the legend title
	legend.text = element_text (size = 15, colour = "black"), #sets the attributes of the legend text#
	axis.title.x = element_text (vjust = -2, size = 20, colour = "black"), #change the axis title
	axis.title.y = element_text (vjust = -0.1, angle = 90, size = 20, colour = "black"), #change the axis title
	axis.text.x = element_text (size = 17, vjust = -0.25, colour = "black"), #change the axis label font attributes
	axis.text.y = element_text (size = 17, hjust = 1, colour = "black"), #change the axis label font attributes#
	axis.ticks = element_line (colour = "black", size = 0.5), #sets the thickness and colour of axis ticks
	axis.ticks.length = unit(-0.25 , "cm"), #setting a negative length plots inside, but background must be FALSE colour
	axis.ticks.margin = unit(0.5, "cm") # the margin between the ticks and the text
	)

# Print data tables in the console
results
plot.data

#19 Update: Should you trust your digital compass?



In the wake of my last post on digital compass clinometer functions, I was left feeling rather dissatisfied that I hadn't found out the whole truth about the accuracy of the digital compasses found in most modern smartphones and tablets. So I set aside half an hour over the weekend to gather some data and see for myself.

Here I present a very rough experiment using:
1 x iPhone (built-in compass app)
2 x Analogue compass clinometers (1 x Suunto MC-2, 1 x Silva type-15)

Data Gathering
Here we are testing the ability to measure magnetic north, so any readings must reflect that. The analogue compasses used are “working” compasses so have had the baseplate correction set to zero. The iPhone compass was also set to measure magnetic north.

To ensure meaningful data, both compasses must always be measuring the same direction of travel, and must not be so close as to magnetically interfere with each other. To achieve this I used a 130 mm wide piece of plastic with parallel sides (an empty DVD box) to separate the two devices whilst ensuring they pointed in the same direction. I also used the right-hand side of the iPhone, as this side has no buttons, and the straight side of the device thus ensures the "top" of the phone is correctly aligned.

I also needed to make sure that there were no significant external magnetic interferences. Since it was pissing it down with rain outside, I opted to work in the centre of a large room. There were no metal/electronic objects within a 2 m horizontal radius. Assuming there is no large neodymium magnet under my floorboards (?), I also assumed there would be no interference from above or below. I was not wearing a watch, necklace, bangle/bracelet, earrings or any other piercings, or chastity belt on my person that could interfere.


On a side note, I am very confident that compass 1 (and, by association, compass 2) is indeed accurate. This is because only a matter of weeks ago I used the compass on fieldwork and on a subsequent hiking holiday, where it was used countless times for triangulation. Most of the time I was showing students how to triangulate, locating ourselves at a known position on a map. These positions were also verified by a high-accuracy mobile GPS. In short, if my compass were inaccurate or damaged, I would be well aware of the fact. Interestingly, whilst on that fieldwork a colleague of mine discovered she had fallen victim to the phenomenon whereby the magnetic polarity of your compass becomes reversed due to prolonged proximity to a smartphone.

Data was gathered in two job-lots, using first compass 1 (C1, Suunto) and then compass 2 (C2, Silva). I sequentially adjusted the baseplate in increments of 10 degrees from 0 to 180. I had the digital and analogue compasses on my lap, separated by the plastic rectangle. Sat on my swivel chair, I then shwiffled (shuffled and swivelled) round until the magnetic needle of my compass overlay the north arrow on the baseplate as closely as I could get it. My lap provided much-needed stability. I then recorded the reading on the iPhone as a counterpart measurement.

The Results
I must say that I was alarmed at the apparent error in the iPhone. The mean difference in readings was 4.8 degrees (s.d. 9.2) and 3.2 degrees (s.d. 6.8) for compass 1 and compass 2 respectively. Interestingly, however, there seems to be a pattern in the data. Both compasses show a shift from positive to negative/less positive error right around 90-110 degrees. While this pattern is more symmetrical for compass 1, both have a similar negative excursion over approximately 70-80 degrees.
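If you want to repeat the experiment, one practical detail: compass differences should be wrapped to ±180° before averaging, so that readings of 359° and 1° count as a 2° error rather than 358°. A sketch with made-up readings (not my actual data):

```python
import statistics

def signed_error(measured, reference):
    """Signed angular difference in degrees, wrapped to (-180, 180]."""
    d = (measured - reference) % 360.0
    return d - 360.0 if d > 180.0 else d

# Hypothetical (baseplate bearing, iPhone reading) pairs
pairs = [(0, 4), (10, 16), (350, 355), (359, 2)]
errors = [signed_error(iphone, baseplate) for baseplate, iphone in pairs]
print(errors)                                   # [4.0, 6.0, 5.0, 3.0]
print(round(statistics.mean(errors), 1),
      round(statistics.stdev(errors), 1))       # 4.5 1.3
```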


Theoretically, any static disturbance in the room should cause a consistent error in the measurements, as neither the needle, the point in the room where measurements were made, nor the site of any potential disturbance actually move. Even if there were something in the room causing interference, one would expect the analogue compass to have been affected too, and thus there would be a homogeneous error in the two measurements.

So what do these errors indicate? Is this error acceptable? Of course the answer is no. Sure, the internal magnetometer in the iPhone, or any other device using a similar piece of hardware, is probably good enough to give you a bearing on a city street, but any remotely consequential use of that information would surely lead to undesirable results. Although the vast majority of people out on mountainsides around the world are unlikely to be the sorts of individuals who would use an iPhone for navigation, my concern surrounds the use of "smart" devices as geological tools. Indeed, in my last post I was very positive about the app GeoID. That's not to say that the app itself is not useful. But with the prospect that the app's data-gathering capabilities are hampered by hardware accuracy, it seems that GeoID is probably more useful and reliable if you collect data with an analogue compass clinometer and then input it into the program. What a laborious and time-consuming task that would be – a task that negates the entire convenience and quirkiness of the app. If you were going to sit there for a couple of hours and input that data into something, why wouldn't you rather input it into a piece of software such as those discussed in this blog post on structuralgeology.org?

I encourage you to gather some data yourself and let me know what you find. I’m a big advocate of using technology wherever possible, especially in geology, so I’m really interested in knowing more about the limitations of such technology.

#18 – Quick Review of GeoID and Digital Compass Clinometers



GeoID is a smart-device app (available on iTunes, Google Play, and CNET) which is a powerful tool for the field geologist, from student to experienced professional. It combines a variety of useful functions, from a compass clinometer to the plotting and analysis of 3D planes and poles. In this post I'm going to review some key features of the app, having field-tested it over two field trips in the North West Highlands of Scotland.

Background on the App


GeoID was developed by Jin Son, an independent developer and Energy Systems Engineering student based at Seoul National University. I'm not sure how long ago he developed the app, but it doesn't appear to have been around for more than 12 months: it has not been rated at all in iTunes, and has had only a few ratings on Google Play (4/5 stars).

Compass Clinometer Feature
There are already a range of apps out there that can effectively replace your traditional compass clinometer (e.g. GeoCompass, Strike and Dip), but none I have seen have the utility and design to match GeoID.

But the question to ask here is not which app looks better, but whether a compass clinometer on your phone/tablet is more accurate than a traditional compass-clino. Well, the accuracy, and indeed precision, of the digital compass and digital clinometer is the product of two things: the hardware and the coding. The compass part of the software uses both the built-in magnetometer AND the accelerometer. The magnetometer measures a heading from the Earth's magnetic field, and this is combined with data from the accelerometer on the orientation of the phone to give you a direction of travel on your screen. Most compass applications on smart devices provide up-to-date corrections for magnetic vs. true north, so this shouldn't be an issue with well-coded apps. The general, better-informed consensus in many online forums is that the difference in accuracy between newer digital compasses and traditional needle compasses is negligible. This is based on direct comparisons between digital and analogue compasses reported online, and conducted by myself.
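To illustrate the magnetometer-plus-accelerometer idea, here is a sketch of a tilt-compensated heading calculation. Axis and sign conventions vary between devices, so treat this as one common convention rather than what any particular phone actually runs:

```python
import math

def heading_deg(ax, ay, az, mx, my, mz):
    """Tilt-compensated heading: pitch and roll from the accelerometer
    rotate the magnetometer vector back into the horizontal plane
    before the heading is taken."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    # project the magnetic field vector onto the horizontal plane
    xh = mx * math.cos(pitch) + mz * math.sin(pitch)
    yh = (mx * math.sin(roll) * math.sin(pitch)
          + my * math.cos(roll)
          - mz * math.sin(roll) * math.cos(pitch))
    return math.degrees(math.atan2(yh, xh)) % 360.0

# Device pointing magnetic north but pitched by 30 degrees: the raw
# magnetometer x-component alone would give the wrong answer, yet the
# tilt-compensated heading is still 0.
t = math.radians(30)
print(heading_deg(-math.sin(t), 0.0, math.cos(t),
                  math.cos(t), 0.0, math.sin(t)))  # -> 0.0
```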

The clinometer seems to be more precise than the analogue clinometers in most compass-clinos. In a blog post I found, the iPhone's clinometer was compared, by a carpenter, with a tilt box (a common, high-precision clinometer used in many professional workshops). He wrote:

“Tried Clinometer last night and compared it to my tilt box, its most likely as accurate as the tiltbox – I used it as a reference to find out if two parts of a railing I am building were co-planar, the difference between the two planes was within 0.1 degrees between the tiltbox and the iphone app. On an absolute scale (i.e. I didn’t zero the reading on the first plane before moving it to the second plane), the two readings were within 0.1 degrees of each other, that’s probably as accurate as it gets in my shop.”

So it seems that the compass clinometer on a smartphone may well be a healthy rival to the conventional analogue device. Its power is multiplied by the ease and speed with which it can take readings (which is not always a benefit, as I'll discuss later).

GeoID further increases this power through clever coding. Dip/strike or dip/dip-direction readings can be averaged over a given time interval to make readings more stable and precise. So when recording a surface there is a small delay before a "stable" reading can be taken. If you record a reading before it is stable, you are warned with a sound.


Data recording & real time plotting
So, the compass clinometer is a quick, accurate and powerful tool. On top of that, in GeoID the compass clinometer button allows you to quickly gather and record lots of structural data on poles and planes, all whilst seeing it plotted in real time onto a stereonet of your choice. Plus, if you have GPS in your device, each datapoint is geotagged!


I tested the app out on a thrust-related fold in Durness limestone on the shore of Loch Assynt, North West Highlands of Scotland. In under an hour I was able to collect 300 dip and dip-direction measurements. Although this was a really nice set of data, I noticed that the ability to gather such a large volume of data must be used with care. The limestone here is heavily karstified, so measuring a large number of surfaces resulted in quite a large scatter on each of the fold limbs. Arguably, had GeoID an in-built averaging function, this would have been less of a problem, but alas it does not.

Once the data is collected, you can see the stereonet full-screen and change from equal-area to equal-angle projections. One thing that would be really useful here would be the ability to have layers of data, so that planes could be fitted to each of the limbs to come up with a mean plane per limb, from which you could read off things such as the mean plunge and plunge direction.
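In the meantime, a mean plane per limb is easy to compute yourself from exported data: convert each dip/dip-direction pair to the downward-pointing pole of the plane, average the pole vectors, and convert back. A rough sketch (east-north-down coordinates; the limb readings are made up):

```python
import math

def pole_vector(dip, dip_dir):
    """Unit vector (east, north, down) of the downward pole to a
    plane with the given dip and dip direction (degrees)."""
    trend = math.radians((dip_dir + 180.0) % 360.0)
    plunge = math.radians(90.0 - dip)
    return (math.cos(plunge) * math.sin(trend),
            math.cos(plunge) * math.cos(trend),
            math.sin(plunge))

def mean_plane(measurements):
    """Average dip/dip-direction readings via their pole vectors."""
    sx = sy = sz = 0.0
    for dip, dip_dir in measurements:
        x, y, z = pole_vector(dip, dip_dir)
        sx, sy, sz = sx + x, sy + y, sz + z
    norm = math.sqrt(sx * sx + sy * sy + sz * sz)
    plunge = math.degrees(math.asin(sz / norm))
    trend = math.degrees(math.atan2(sx, sy)) % 360.0
    return 90.0 - plunge, (trend + 180.0) % 360.0  # dip, dip direction

# Hypothetical limb readings scattered about roughly 30/090
limb = [(28, 85), (32, 92), (30, 88), (31, 95)]
dip, dip_dir = mean_plane(limb)
print(round(dip, 1), round(dip_dir, 1))
```

Note this simple vector mean is fine for a tight cluster on one limb; statistics across both limbs of a fold need proper orientation methods (e.g. eigenvector analysis).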

I had also given my iPhone to another geologist to gather some extra data. Although smaller, it could be placed on a clipboard to average the bedding-plane surfaces. The app allows you to share the data via email or Bluetooth, or export it to file; however, there is no ability to collaborate on the same project, so I was unable to amalgamate the two datasets.

I really like the GeoID app in its simplest form, as a tool for gathering dip/dip-direction data quickly and accurately. I think that it has a lot of potential on the analytical side of things, although I appreciate the complexity of implementing such features in the program. What I think this app needs is more users, more raters, and for us to give some constructive feedback to Jin Son on how he can make this app even more awesome!

Synopsis
Good
– Compass clinometer is incredibly useful and accurate
– Ability to quickly gather and store pole/plane structural data
– Real-time plotting of data allows on-the-fly interpretation
– Very user friendly
– Easily send/export data

Bad
– Limited to one layer of data
– No structural analysis relevant to folding etc
– Cannot collaborate on, or amalgamate projects

#16 Colour: A quick guide to its use in informative graphics


1.0 Introduction
The most fundamental employment of colour in qualitatively or quantitatively informative graphics is to allow the observer to easily distinguish elements of the information displayed. The strict definition of the term "colour" must be cast aside here, as black, greys and white are just as useful. Much of what will be discussed in this post may seem intuitive, but it is the guidelines, and their employment in practice, which unfortunately escape many.

The most important point to take away from this post is this: colour used well can enhance and clarify visual information; colour used badly will likely obscure and confuse.

2.0 Principles of Colour in Design

2.1 Hue, value and chroma

Colour designers use three variables to describe any particular colour:

  1. Hue – Fig. 2.1 – The name of the colour (e.g. Red, or Blue)
  2. Value – Fig. 2.2 – The lightness or darkness of that colour
  3. Chroma – Fig. 2.3 – Equivalent to saturation (reducing to zero gives equivalent grey value)

2.1 – Hue values

2.2 – Value, or brightness.


2.3 – Chroma, or saturation.
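Hue, value and chroma map closely onto the HSV colour model used in most graphics software, which makes the chroma definition easy to demonstrate: dropping saturation to zero leaves a grey whose brightness is the value. Using Python's standard colorsys module:

```python
import colorsys

# Hue 0.0 (red), full value (brightness), decreasing chroma/saturation
for s in (1.0, 0.5, 0.0):
    r, g, b = colorsys.hsv_to_rgb(0.0, s, 1.0)
    print(s, (round(r, 2), round(g, 2), round(b, 2)))
# s = 1.0 gives pure red (1.0, 0.0, 0.0); s = 0.0 gives the grey of
# the same value, here (1.0, 1.0, 1.0)
```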

2.2 Legibility

If something is legible, it is clear to read. Although this extends to literal clarity, here we're talking about optical clarity to the human observer. Hue and chroma do not contribute much to legibility; what matters is the luminance contrast (or contrast in value) between the background and the foreground. Fig. 2.4 shows varying degrees of legibility, due to differences in contrast between the red and black text and the background.


2.4 – Legibility is a result of the difference in value (brightness).
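Luminance contrast can be put on a number: the WCAG accessibility guidelines, for example, define a contrast ratio from the relative luminance of foreground and background, running from 1:1 (illegible) to 21:1 (black on white). A sketch of that calculation:

```python
def relative_luminance(r, g, b):
    """WCAG relative luminance from sRGB components in 0-1."""
    def lin(c):
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b)

def contrast_ratio(fg, bg):
    """WCAG contrast ratio, from 1.0 (identical) to 21.0 (black/white)."""
    l1, l2 = relative_luminance(*fg), relative_luminance(*bg)
    return (max(l1, l2) + 0.05) / (min(l1, l2) + 0.05)

black, white, red = (0, 0, 0), (1, 1, 1), (1, 0, 0)
print(round(contrast_ratio(black, white), 1))  # -> 21.0, maximally legible
print(round(contrast_ratio(red, black), 1))    # -> 5.3
print(round(contrast_ratio(red, white), 1))    # -> 4.0
```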


Legibility, therefore, can be achieved in a number of ways. Adequate contrast may be achieved in either a monochromatic or a multi-chromatic image. As far as monochromatic images are concerned (Fig. 2.5), the case is simple: value contrast is the only thing one needs to consider. For colour images (e.g. Fig. 2.7), contrast in the strictest sense of the word surely gives us legibility, but unlike in monochromatic images, legibility is then not our only concern!


2.5 – Monochromatic image achieving legibility through contrast in greys.


2.6 – Image shows high analogy of colours and low contrast.


2.7 – Image shows high contrast and two analogous colour groups.
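The “contrast in value” that drives legibility can even be put on a number. Below is a minimal sketch in base R of the standard sRGB relative-luminance formula and the WCAG contrast ratio built from it (the colour pairs are just examples):

```r
# Relative luminance of a colour, per the sRGB / WCAG definition
rel_lum <- function(colour) {
  ch  <- col2rgb(colour) / 255                         # R, G, B in [0, 1]
  lin <- ifelse(ch <= 0.03928, ch / 12.92, ((ch + 0.055) / 1.055)^2.4)
  sum(c(0.2126, 0.7152, 0.0722) * lin)                 # green weighted most heavily
}

# Contrast ratio between two colours: 1 (none) up to 21 (black on white)
contrast_ratio <- function(fg, bg) {
  l <- sort(c(rel_lum(fg), rel_lum(bg)), decreasing = TRUE)
  (l[1] + 0.05) / (l[2] + 0.05)
}

contrast_ratio("black", "white")  # ~21, the maximum possible
contrast_ratio("red", "darkred")  # much lower: poor legibility, as in Fig. 2.4
```

High ratios read easily on any medium; low ratios are exactly the muddy text shown in the figure above.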

3.0 Selecting a colour palette

3.1 Review of concepts

So by now you should understand that the important colour principles should be employed in the following manner:

  • Select COLOUR to suit FUNCTION
    e.g. I’ll select a red hue for high temperatures and a blue hue for low ones. Intuitive, eh?

3.1 – Colours here have been selected for intuitive function.

  • Use colour ANALOGY to delineate GROUPS
    e.g. Set A = Orange, Set B = Green. Each set may have different chroma values.

3.2 – Here, the red and blue palettes have analogous variations of chroma (i.e. saturation).

  • Use CONTRAST to HIGHLIGHT
    i.e. New data vs. old data.

3.3 – Contrast here is used to highlight the “new” data over the “old” data.

  • Use CONTRAST for LEGIBILITY
    Often useful when labelling.

3.4 – White text on a black background gives high contrast, which translates into high legibility.

In most situations where information is being displayed, it is best to limit your colour palette to two or three different hues. With a limited hue palette, you are forced to create variation by changing the value and chroma of your core hues. Examples of these limited palettes have already been discussed.
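One way to build such a limited palette in base R is `colorRampPalette()`, which interpolates between a light and a dark endpoint of a single hue, varying value and chroma while the hue stays fixed (the two hex endpoints below are illustrative picks from a blue hue):

```r
# Three blues sharing one hue, differing only in value/chroma
blues <- colorRampPalette(c("#DEEBF7", "#08306B"))(3)
blues  # light, mid and dark blue as hex codes
```

Asking the same ramp for five or seven colours gives a finer gradation without adding a second hue.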

3.2 Colour Palettes vs. Symbols

Of course, many researchers need to display information in monochromatic images, owing to constraints imposed by the journal in which they wish to publish. The de facto choice for distinguishing multiple sets of information is then symbology (e.g. dashed/dotted lines, different point symbols). However, studies suggest that the human eye can only keep track of a maximum of around seven variables in any given set. This means that if you have more than seven symbols or line types, your plot is going to be hard to decipher. As discussed, the same applies to colour. So when it comes to symbols, it is also advisable to limit the number used to around three. This thinking is reflected in the much-used ggplot2 package for R: the package, written by Hadley Wickham, only assigns a small number of point symbols (pch values) automatically, and warns when more groups are present.

3.3 Complex palettes

Restricting your palette to three hues, or similarly limiting the number of symbols, is an ideal. In practice there are many situations where you simply need more. More symbols, or more colours? The advice from the experts is to use colour: it is much less work, and the mind is far better at distinguishing colours than a series of increasingly complex shapes. So should you need to display seven sets of data, colour is the way to go. The consensus, though, is that if you need to display more than seven sets of data, you should rethink the way you are visualising your information.

3.4 Colour palette resources

Now that you understand a lot more about using colour, you could go out there and compose your own colour palettes… OR you could use any number of great websites to provide the palettes for you! AND if you use ggplot2, the first of these (ColorBrewer) can be integrated with your plots! Remember, these palettes are not only great for plotting data, they also make excellent colour schemes for posters and presentations!
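For R users, the ColorBrewer palettes are available through the RColorBrewer package (`brewer.pal()`) and, in ggplot2, via `scale_colour_brewer()` / `scale_fill_brewer()`. If you want to stay in base R with no extra packages, `hcl.colors()` (R ≥ 3.6) ships with a long list of ready-made palettes, several of them ColorBrewer-inspired:

```r
# Five colours from a sequential blues palette, no extra packages needed
pal <- hcl.colors(5, palette = "Blues 3")
pal
# hcl.pals() lists every palette name available
```

The palette name "Blues 3" is one of the built-in sequential schemes; swap in any name from `hcl.pals()`.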

4.0 A word on background colours

Some people prefer to use background colours in their plots, commonly pale yellows or greys. The problem is that most colours (particularly preset colour palettes) are chosen to be printed on white paper, so it stands to reason that they should be displayed digitally on a white background too. Not only that, our brain constantly normalises our palette against a local reference for white. For example, if you’ve worn yellow ski goggles all day, everything appears blue when you take them off. This is also why a photo taken indoors under a tungsten bulb often comes out more orange than the scene looked to you: the camera’s white balance has not been adjusted the way your eyes have.

3.5 – Most camera users encounter white balance issues when shooting indoors. This is because the camera’s white balance is not set correctly. In the human brain, white balance is constantly being corrected, so we would see the right-hand image.

3.6 – Early computer screens weren’t bright enough to display black text legibly on a white background, so white text was shown on a black background to increase legibility.

Dark backgrounds are only recommended when the information is being displayed in a dark environment. There, light text on a dark background avoids legibility problems, allowing the viewer to see the information without being dazzled by the background. The reason that early computer screens and the earliest projection systems used dark backgrounds was that the displays weren’t very bright, and consequently had to be used in rooms with reduced lighting. Modern projectors and screens, however, are bright enough that this is no longer the case.

#15 Alkali Silica Template


Alkali Silica

Does what it says on the tin.

DOWNLOAD THE CODE

#------------------------------
#-------- INFORMATION ---------
#------------------------------
# Plotting points from Hugh
# Rollinson's "Using Geochemical
# Data" book. Code compiled by
# Darren J. Wilkinson,
# Grant Inst. Earth Science
# The University of Edinburgh
# d.j.wilkinson@ed.ac.uk
#------------------------------

# -------- CONTROLS ----------
y.max = 16
x.min = 35
x.max = 80
lab.size = 5
save = "/Users/s0679701/Desktop/"
filename = "test.png"
#------------------------------

# -------- LIBRARIES ----------
library(grid)
library(scales)
library(ggplot2)
#------------------------------

# -------- DON'T EDIT ----------
a = c(1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4, 4, 5, 5, 5, 5, 6, 6, 6, 6, 7, 7, 7, 8, 8)
x = c(41.0,41.0,52.5,50.0,52.5,57.6,63.0,63.0,45.0,45.0,61.0,63.5,45.0,52.0,57.0,63.0,69.0,45.0,49.4,52.0,52.0, 48.4,53.0,57.0,57.0,69.0,69.0,76.6,41.0,45.0)
y = c(0.500,7.000,14.000,15.126,14.000,11.700,7.000,0.500,0.500,5.000,13.500,14.830,5.000,5.000,5.900, 7.000,8.000,9.400,7.300,5.000,0.500,11.500,9.300,5.900,0.500,12.500,8.000,0.500,3.000,3.000)
lines = data.frame (a, x, y)

b = c("Picro-Basalt", "Basalt", "Basaltic Andesite", "Andesite", "Dacite", "Basanite", "Trachy Basalt", "Basaltic Trachyandesite", "Trachyandesite", "Trachydacite", "Rhyolite", "Tephrite", "Phonotephrite", "Tephriphonolite", "Trachyte", "Foidite", "Phonolite")
x = c(43, 48.5, 54.5, 60, 68, 43, 48.75, 53, 57.5, 65, 75, 46, 49, 53, 65, 45, 58)
y = c(2, 2.5, 3, 3.5, 4, 6, 5.5, 6.5, 8.5, 9, 8, 8, 9.5, 11.5, 12, 13, 14)
labels = data.frame (b, x, y)

x = c(39.8,65.5) #MacDonald (1968)
y = c(0.35, 9.7) #MacDonald (1968)
sub = data.frame (x, y)
#------------------------------

# -------- BEGIN PLOT ----------

ggplot(lines, aes(x = x, y = y)) +

# Field Boundaries
geom_line(
aes(group = a) # one line per boundary segment; mapping a to an aesthetic such as linewidth would draw each boundary at a different width
) +

# Alkaline-Tholeiitic Line
geom_line(
data = sub,
aes(x = x, y = y),
linetype = "longdash"
) +

# Field Labels
geom_text(
data = labels,
aes(x = x, y = y, label = b),
size = lab.size
) +

scale_x_continuous(
name = expression(paste("SiO"["2"], " (wt. %)")),
limits = c(x.min, x.max)
) +

scale_y_continuous(
name = expression(paste("Na"["2"], "O + K"["2"], "O", " (wt. %)")),
breaks = seq(0, y.max, 2),
limits = c(0, y.max)
) +

theme(
plot.title = element_text(vjust = 3, size = 20), # plot title
plot.margin = unit(c(3, 3, 3, 3), "lines"), # margins of the entire plot
panel.border = element_rect(colour = "black", fill = NA, linewidth = 2), # black border around the panel
panel.grid.major = element_blank(), # remove major grid
panel.grid.minor = element_blank(), # remove minor grid
panel.background = element_rect(fill = "white"), # opaque white background, NEEDED FOR INSIDE TICKS
legend.background = element_rect(fill = "white"),
legend.justification = c(1, 1),
legend.position = c(1, 1), # put the legend INSIDE the plot area
legend.key = element_blank(), # switch off the rectangle around symbols in the legend
legend.title = element_blank(), # switch off the legend title
legend.text = element_text(size = 15), # attributes of the legend text
axis.title.x = element_text(vjust = -2, size = 20), # x axis title
axis.title.y = element_text(vjust = -0.1, angle = 90, size = 20), # y axis title
axis.text.x = element_text(size = 17, colour = "black", margin = margin(t = 12)), # x labels; the margin keeps them clear of the inside ticks (replaces the removed axis.ticks.margin)
axis.text.y = element_text(size = 17, colour = "black", margin = margin(r = 12)), # y labels, likewise spaced from the ticks
axis.ticks = element_line(colour = "black", linewidth = 0.5), # thickness and colour of axis ticks
axis.ticks.length = unit(-0.25, "cm") # negative length draws the ticks inside the panel
)

ggsave(paste0(save, filename), height = 12, width = 18, dpi = 75) # paste0 avoids inserting a space into the file path
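To plot your own analyses on the template, add a `geom_point()` layer before the `theme()` block. A minimal self-contained sketch, where `my.data` and its column names (`SiO2`, `alkalis`) are hypothetical stand-ins for your own data frame, and `lines` is a shortened stand-in for the template’s boundary data:

```r
library(ggplot2)

# Shortened stand-in for the template's field-boundary data frame
lines <- data.frame(a = c(1, 1, 2, 2),
                    x = c(41, 52.5, 45, 61),
                    y = c(0.5, 14, 5, 13.5))

# Hypothetical whole-rock analyses (wt. %)
my.data <- data.frame(SiO2    = c(48.2, 55.7, 63.1),
                      alkalis = c(4.1, 6.3, 7.8))

p <- ggplot(lines, aes(x = x, y = y)) +
  geom_line(aes(group = a)) +                      # field boundaries
  geom_point(data = my.data,
             aes(x = SiO2, y = alkalis),
             shape = 21, size = 3, fill = "red")   # your samples

# ggsave("tas_with_data.png", p, height = 12, width = 18, dpi = 75)
```

In the full template the same `geom_point()` call slots in after the field-label layer, so the axes, labels and theme settings apply to your points as well.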