Author Climbing in the Queyras, Summer 2014

Monday, May 02, 2016

In Maudslay's Shadow: 
Introduction to the 3D Imaging of Pre-Columbian 
Mesoamerican Artifacts

 Philosophy is the theory of multiplicities, each of which is composed of actual and virtual elements. Purely actual objects do not exist. Every actual is surrounded by a cloud of virtual images.
                                                                                                 --G. Deleuze, The Actual and the Virtual

The Library of Congress’ Geography and Map Division is home to a large collection of Pre-Columbian archaeological artifacts donated by the collector Jay I. Kislak, many of which are on display as part of the Exploring the Early Americas exhibit in the Thomas Jefferson Building here in Washington, DC. The artifacts that make up the collection range in date from the Olmec culture, around 1000 BCE, through the Classic period Maya (300-900 CE) and the Aztec, and include many objects dating from the period just before contact with the Spanish in the late 15th century.

As the Curator of the Jay I. Kislak Collection, I am always looking for new and innovative ways to make this group of archaeological artifacts more accessible to scholars and educators around the world who cannot, for whatever reason, make the trip to Washington, DC. Those who can make the trip are always welcome to use the Kislak Study Collection, located in the Geography and Map Division, where the artifacts not currently on display in the gallery are stored.

One way that I am attempting to make the collection more available is through the use of three-dimensional imaging. In the case of material artifacts, two-dimensional images, while helpful, do not allow for the complete examination of an object and, moreover, can often distort its dimensionality and structure. In order to make proper attributions and comparisons with similar objects in other collections, it is critical for scholars who cannot examine an artifact in person to have realistic views “in the round” of what they are studying. To this end we have embarked on a series of experiments here at the Library of Congress that uses three-dimensional structure from motion imaging to reconstruct scaled and true-to-life models of the artifacts in our collection.

Structure from motion imaging is a complex technique that allows for the extraction of three-dimensional information not only from single objects, but also from the architectural features of buildings and ruins, or from landscapes, all derived from a series of two-dimensional images. The technique was developed for computer and robot vision and is the digital equivalent of the task that the brain and eye perform as we move through a three-dimensional world using two-dimensional projections[1].


Figure 1: Hollow Kneeling Male Figure, West Mexico, Jalisco, Terminal Pre-Classic Period, 200 BCE- 300 CE. Kislak Collection 0012. Geography and Map Division, Library of Congress.

I have called this short review of the techniques and applications of this type of imaging to archaeological objects, “In Maudslay’s Shadow,” in order to recall his use of the most up-to-date photographic technology of the late-19th century. Alfred Maudslay (1850-1931), over the course of several decades and endless difficult journeys into the jungle, took images of many of the most important Maya archaeological sites and inscriptions then being unearthed in Central America[2]. Besides his photographs, which were critical to the decipherment of the then unreadable Maya script, he also made many three-dimensional casts of stelae[3]. Maudslay’s images and models ushered in an amazing time of discovery in early American archaeology and revolutionized the practice of both field and museum imaging[4]. For this reason, those of us trying to use new imaging technologies today stand in Maudslay’s long shadow.



Figure 2: Maudslay’s Photograph of Stela H, Copan, 1885,
Geography and Map Division, Library of Congress

Techniques and Examples

The calculations required to do structure from motion imaging (SfM) are algorithmically complicated, and are related to the photogrammetric techniques used to sort out the difficult geometry of remote sensing images of the earth and other planetary bodies taken from satellites.
To make a three-dimensional model of an archaeological object using SfM, a group of two-dimensional images is taken from a variety of vantage points and processed through a pipeline of computer programs that together create a dense three-dimensional point cloud representing all the surfaces that make up the artifact.



 Figure 3: To begin the process of SfM imaging, a series of photographs is taken with a high-resolution digital camera from various vantage points. This is done in order to get a complete set of 2D images of the object, from which point correspondences are calculated and used to reconstruct the 3D object.



The first step in any attempt to make three-dimensional models from two-dimensional photographs is the acquisition of the digital images themselves. As in all things computational, the better the input data, the better the resulting model. For structure from motion imaging the digital inputs must be taken by either rotating the object or walking around it. Most of the images should overlap, and should be taken from a variety of angles, in order to ensure that the resulting point clouds cover the entire surface of the object being modeled. These images are used to determine the key points, features, and point correspondences that generate the first sparse point-cloud model of the object and are critical to all the calculations that follow.

There are many algorithms currently available that can calculate point correspondences; they go by the generic name of keypoint detectors. Perhaps the best, and the one used here, is SIFT (Scale Invariant Feature Transform), developed by David Lowe. SIFT is typically used for object recognition in computer vision and has many features that are important for 3D reconstruction. The keypoints that are selected and matched across multiple images are invariant to scaling and to other transformations such as rotation, which allows the algorithm to be used with uncalibrated camera images[5].
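To give a concrete sense of what a keypoint detector does, here is a minimal sketch that uses the SIFT implementation in the open-source OpenCV library to detect and match features between two photographs. The file names and the ratio-test threshold are placeholders for illustration, not details of our actual workflow.

```python
# A minimal sketch of SIFT keypoint detection and matching with OpenCV.
# The file names are hypothetical; any two overlapping photographs will do.
import cv2

img1 = cv2.imread("figurine_view_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("figurine_view_02.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)   # keypoints + 128-dim descriptors
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors and keep only matches that pass Lowe's ratio test,
# which rejects ambiguous correspondences.
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

print(f"{len(kp1)} and {len(kp2)} keypoints detected; {len(good)} matches kept")
```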


 

Figure 4: Camera Positions and Key points output from VisualSfM

 


Figure 5: Output of calculation yielding camera position and initial 3D reconstruction for the Kislak Olmec Figurine


 
Figure 6: Image Matching Matrix

The exact matching of points across the series of digital images can be represented in a feature-matching matrix that gives a visual sense of the connections between images. Typically SIFT will detect tens of thousands of features even for low-resolution images, and hundreds of thousands of stable points for a 10-15 megapixel image.
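A matrix like the one in Figure 6 can be approximated with a few lines of code: for every pair of photographs, count the matches that survive the ratio test and store the count in a square array. This is only a schematic version of what VisualSfM computes internally, and the file names below are hypothetical.

```python
# Sketch: building a pairwise feature-matching matrix like the one in Figure 6.
import cv2
import numpy as np

files = ["view_01.jpg", "view_02.jpg", "view_03.jpg"]   # placeholder names
sift = cv2.SIFT_create()
bf = cv2.BFMatcher()

descs = [sift.detectAndCompute(cv2.imread(f, cv2.IMREAD_GRAYSCALE), None)[1]
         for f in files]

n = len(files)
M = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(i + 1, n):
        pairs = bf.knnMatch(descs[i], descs[j], k=2)
        good = [m for m, nn in pairs if m.distance < 0.75 * nn.distance]
        M[i, j] = M[j, i] = len(good)    # symmetric count of shared features

print(M)   # large entries indicate strongly overlapping photographs
```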

The actual process of making three-dimensional models using structure from motion consists of two parts. During the first, the computer examines the two-dimensional photographs and finds matching points in multiple images. The points are then used to calculate the actual position in space where each of the images was taken. Once the position of each image is known, the locations of the points from all the images are plotted in space, yielding a dense reconstruction of the shape of the object that was photographed. The result is a point cloud that is very similar to the kind of data one would extract from a laser scan of an object.
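For readers curious what the first of these two parts looks like in practice, the sketch below uses OpenCV to recover the relative position of two cameras from matched points and then triangulates an initial, sparse set of 3D points. It assumes two arrays of matched pixel coordinates, pts1 and pts2, which could be extracted from the SIFT matches in the earlier sketch, and an approximate camera matrix K whose values are placeholders rather than the calibration of the cameras used here.

```python
# Sketch: recover relative camera pose from matched points and triangulate a
# first, sparse set of 3D points. pts1/pts2 are assumed Nx2 float arrays of
# matched pixel coordinates; K is an assumed (placeholder) intrinsic matrix.
import cv2
import numpy as np

K = np.array([[3000.0, 0.0, 2000.0],    # focal length and principal point
              [0.0, 3000.0, 1500.0],    # (placeholder values)
              [0.0, 0.0, 1.0]])

E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                  prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)   # rotation and translation of
                                                 # camera 2 relative to camera 1

# Projection matrices for the two views, then triangulation.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)   # 4xN homogeneous points
X = (X_h[:3] / X_h[3]).T                              # Nx3 sparse point cloud
```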

In the case of the models being made here at the Library of Congress, several different computer programs are being experimented with, including VisualSfM, developed by Changchang Wu while in the Department of Computer Science at the University of North Carolina at Chapel Hill and now at Google[6]. His program, from which the point-cloud images shown above were reconstructed, uses SIFT combined with graphics processing unit (GPU) techniques developed by Sudipta N. Sinha and others[7].

Reconstruction can be accomplished using as few as two images, but multiple images yield much better results, even though the computation is more complex. Using multiple images one faces what is known as the structure and motion problem. Put succinctly, the problem asks:

Given a set of corresponding points observed in several images, taken by cameras whose positions, orientations, and internal (intrinsic) parameters may be unknown, recover both the three-dimensional structure of the scene and the motion of the cameras that produced them.

In the case we are working with here, the intrinsic parameters are unknown, as the sequence of images we are working from is uncalibrated. This reduces to a problem in projective and epipolar geometry: determining the position of the camera when each of the images was taken and calculating what is called the fundamental matrix, which relates corresponding points across pairs of views. The exact mathematical details are beyond this short review, and the reader is referred to the excellent survey by Olivier Faugeras and Quang-Tuan Luong[8].
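In code, the uncalibrated case looks much like the calibrated one: the same correspondences are used to estimate the fundamental matrix robustly with RANSAC, and the quality of the estimate can be checked against the epipolar constraint. The fragment below is a sketch along these lines, again assuming the matched point arrays pts1 and pts2 from the earlier examples.

```python
# Sketch: estimate the fundamental matrix from uncalibrated correspondences
# and check the epipolar constraint x2^T F x1 ~ 0 for the inlier matches.
import cv2
import numpy as np

F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)

inl1 = pts1[mask.ravel() == 1]
inl2 = pts2[mask.ravel() == 1]

# Homogeneous coordinates: each residual x2^T F x1 should be near zero.
x1 = np.hstack([inl1, np.ones((len(inl1), 1))])
x2 = np.hstack([inl2, np.ones((len(inl2), 1))])
residuals = np.abs(np.einsum("ij,jk,ik->i", x2, F, x1))
print("median epipolar residual:", np.median(residuals))
```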

Once the corresponding features have been identified across a large series of images, the movement of the camera around the object is used in combination with its focal length to precisely reconstruct the original camera positions. This results in the kind of sparse point cloud shown in figures 4 and 5. Although this series of points gives a good impression of the object's 3D geometry, it is insufficient for a metrically accurate and realistic reconstruction.

Currently, there are many different approaches for generating what is known as a dense point cloud, which yields a more accurate 3D representation. As the name suggests, a ‘dense’ cloud is computationally more involved than the initial sparse model. One way to overcome this difficulty is to divide the task into smaller parts using Patch-based Multi-View Stereo (PMVS) algorithms. These algorithms are very efficient: they take the output from a structure from motion program like VisualSfM and decompose it into a set of clusters of manageable size.

The basic sequence of calculations performed by the programs being used here is as follows:

  1. Extraction of key points and linking features from a group of 2D images
  2. Image matching and calculation of the camera position at the time each image was taken
  3. Sparse model reconstruction
  4. Dense model reconstruction
  5. Surface meshing and error compensation (bundle adjustment)


The last step, bundle adjustment, is always the final problem to be overcome in any 3D reconstruction project. Bundle adjustment is an optimization procedure that attempts to reduce the noise associated with the errors introduced in the various projection calculations. This noise matters because keypoint-matching algorithms like SIFT may introduce small errors, which show up as differences between the observed image locations of points and the locations predicted by the reconstruction.
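A toy version of what bundle adjustment minimizes is sketched below: a residual function measuring the reprojection error, that is, the gap between where a 3D point is predicted to appear in each image and where it was actually observed, handed to a general-purpose nonlinear least-squares solver. For brevity the camera parameters are held fixed here; a real bundle adjuster, such as the multicore program used by VisualSfM, refines them as well and exploits the sparsity of the problem. All variable names in this fragment are assumptions for illustration.

```python
# Toy bundle adjustment sketch: minimize reprojection error over the 3D points,
# with camera rotations Rs, translations ts, and intrinsics K held fixed for
# simplicity. obs is a list of (camera_index, point_index, observed_xy) tuples.
import numpy as np
from scipy.optimize import least_squares

def reprojection_residuals(points_flat, Rs, ts, K, obs):
    pts3d = points_flat.reshape(-1, 3)
    res = []
    for cam, pt, xy in obs:
        p_cam = Rs[cam] @ pts3d[pt] + ts[cam]       # world -> camera frame
        p_img = K @ p_cam
        predicted = p_img[:2] / p_img[2]            # perspective projection
        res.extend(predicted - xy)                  # predicted vs. observed
    return np.asarray(res)

def refine_points(pts3d_init, Rs, ts, K, obs):
    result = least_squares(reprojection_residuals, pts3d_init.ravel(),
                           args=(Rs, ts, K, obs))
    return result.x.reshape(-1, 3)
```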

Even though dense point clouds can give an aesthetically and visually pleasing impression of an actual 3D object, the calculated model will dissolve into its individual points when magnified beyond a certain scale. The point cloud needs to be further processed into an interactive, scaled, and true-to-form model of the object by overlaying a polygonal mesh that represents the artifact’s form mathematically. Polygonal meshes come in many shapes but are most commonly triangular[9], and their structure depends on the smoothness of the surface being imaged[10].



The polygonal mesh reconstructs the surface of the object, approximating, sometimes to extreme accuracy, the shape and features of the original continuous surface. The newly emerging field of discrete differential geometry is allowing for faster computation of these meshes, which can be quite large[11]. Current methods of surface reconstruction can be roughly divided into two classes. The first, the so-called sculpting methods, start with the convex hull of the entire point cloud and remove pieces until the actual surface of the object has been reached. The second, termed region growing, starts with a minimal triangulation and keeps adding newer and denser triangles to the model until the desired level of realism is reached.

Many algorithms have been created to accomplish this task, from simple Delaunay triangulation to Poisson Reconstruction[12], Marching Cubes[13] and Power Crust[14].
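As one hedged illustration of this step, the open-source Open3D library exposes a Poisson reconstruction routine that turns a dense point cloud into a triangular mesh in a few lines. The file names and the octree depth below are placeholders, and the exact API may vary between versions of the library.

```python
# Sketch: Poisson surface reconstruction of a dense point cloud with Open3D.
# The input file name is hypothetical; depth controls the mesh resolution.
import open3d as o3d

pcd = o3d.io.read_point_cloud("kislak_0012_dense.ply")     # placeholder file
pcd.estimate_normals()                                      # Poisson needs normals

mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9)

print(f"{len(mesh.vertices)} vertices, {len(mesh.triangles)} triangles")
o3d.io.write_triangle_mesh("kislak_0012_mesh.ply", mesh)
```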

At the Library of Congress we are currently experimenting with a combination of programs developed by Autodesk, such as 123D Catch and Meshmixer, to produce the mesh models shown in this paper[15].


Figure 7: Triangular Mesh Rendering of a three-dimensional model of the Kneeling Male Figure in Figure 1 

Besides the density of the triangular mesh, additional techniques are used to visualize the smoothness and texture of the surface. Specular shading, lines of reflection, and what are called isophotes, or lines of constant illumination across the surface, all help accentuate the three-dimensional data derived from the two-dimensional photographs. In some cases additional algorithms might be used to smooth the surface and de-noise the photographic data, in order to fill holes or blend the surface curvature and improve the visualization of the artifact.
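The simplest of these smoothing operations, Laplacian smoothing, nudges each vertex toward the average of its neighbors. A plain-NumPy sketch of such a pass is given below; the vertex and triangle arrays are assumed to come from a reconstruction like the one above.

```python
# Sketch: Laplacian smoothing of a triangle mesh. vertices is an Nx3 float
# array; triangles is an Mx3 integer array of vertex indices.
import numpy as np

def laplacian_smooth(vertices, triangles, lam=0.5, iterations=10):
    v = vertices.copy()
    # Build vertex adjacency from the triangle edges.
    neighbors = [set() for _ in range(len(v))]
    for a, b, c in triangles:
        neighbors[a].update((b, c))
        neighbors[b].update((a, c))
        neighbors[c].update((a, b))
    for _ in range(iterations):
        new_v = v.copy()
        for i, nbrs in enumerate(neighbors):
            if nbrs:
                centroid = v[list(nbrs)].mean(axis=0)
                new_v[i] = v[i] + lam * (centroid - v[i])   # pull toward neighbors
        v = new_v
    return v
```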
In technical terms these meshes are large undirected graphs with many vertices and faces[16]. The model above, for example, contains more than 400,000 nodes at medium resolution and is a truly complex discrete mathematical object. The underlying mathematics of this reconstruction relies on the geometry of digital spaces, which has been developed over the last decade or two for the creation of realistic virtual and augmented reality experiences and for computer gaming applications[17].
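Treating the mesh as a graph also makes some quick sanity checks possible. The sketch below counts vertices, edges, and faces and computes the Euler characteristic V - E + F, which should equal 2 for a watertight, genus-zero surface; the input arrays are the same hypothetical vertices and triangles used in the smoothing sketch.

```python
# Sketch: the mesh as a graph; count vertices, edges, and faces and compute
# the Euler characteristic V - E + F (2 for a closed genus-zero surface).
import numpy as np

def euler_characteristic(vertices, triangles):
    V = len(vertices)
    F = len(triangles)
    # Each triangle contributes three edges; shared edges are counted once.
    edges = {tuple(sorted(e))
             for a, b, c in triangles
             for e in ((a, b), (b, c), (c, a))}
    E = len(edges)
    return V - E + F
```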


Figure 8: Photo cluster of images surrounding a Seated Male Figure from the Olmec Middle Pre-Classic Period, 1100-500 BCE. This shows the locations at which each of the photos was taken relative to the object. Jay I. Kislak Collection, Geography and Map Division, Library of Congress.

The main difficulty associated with structure from motion imaging centers on solving a problem in projective and epipolar geometry. The solution relates all of the images taken of a particular object to each other by using common points and reference lines. This so-called “geometry of multiple images” problem is an active area of research in computer vision and is being applied increasingly in archaeological contexts[18]. Constructing these geometries allows for the scaling and reconstruction of models that can be measured and compared with similar archaeological artifacts.
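Because a reconstruction of this kind is only determined up to an overall scale, a known reference measurement, such as a caliper measurement taken on the physical artifact, is used to bring the model into real-world units. A minimal sketch is shown below, assuming two model points whose true separation has been measured; the numbers in the example are placeholders.

```python
# Sketch: scale a reconstructed model to real-world units from one known
# distance. p_a and p_b are model coordinates of two reference points whose
# true separation (in centimeters) was measured on the physical artifact.
import numpy as np

def scale_model(vertices, p_a, p_b, true_distance_cm):
    model_distance = np.linalg.norm(np.asarray(p_a) - np.asarray(p_b))
    s = true_distance_cm / model_distance
    return vertices * s        # all coordinates now in centimeters

# Hypothetical usage: the figure's base is measured at 10.5 cm across.
# scaled = scale_model(vertices, base_left, base_right, 10.5)
```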

Figure 9: Scaling and Measuring the Kneeling Figure shown above.
Geography and Map Division, Library of Congress.
Here in the Geography and Map Division we are just beginning our experiments using this technique with the hope that soon we shall be able to make three-dimensional, dynamic, and interactive models of the Kislak Collection available to scholars around the world who are interested in applying this exciting new technology to their research.


 
Figure 10: Three-Dimensional Model of Kislak Olmec Figurine 0155.
Jay I. Kislak Collection, Geography and Map Division, Library of Congress.

 

Figure 11: Seated Olmec Figurine, Kislak Collection 0155, from the Middle Pre-Classic, 1000-500 BCE.
Jay I. Kislak Collection, Geography and Map Division, Library of Congress.


[1] For more on the mathematics of structure from motion imaging and its relationship to the quickly evolving fields of computer and robotic vision, see Richard Hartley and Andrew Zisserman, Multiple View Geometry in Computer Vision (Cambridge, UK: Cambridge University Press, 2003).
[2] Alfred Maudslay, Biologia Centrali-Americana. Archaeology. 4 volumes and 16 fascicules of photographs (London: R.H. Porter, 1889-1902).
[3] Thomas Athol Joyce and Alfred Maudslay, Guide to the Maudslay Collection of Maya Sculptures (casts and originals) from Central America (London: British Museum, 1925).
[4] Ian Graham, Alfred Maudslay and the Maya: A Biography, (Norman: University of Oklahoma Press, 2002).
[5] David G. Lowe, “Distinctive Image Features from Scale-Invariant Keypoints,” International Journal of Computer Vision 60, 2 (2004): 91-110. http://www.cs.ubc.ca/~lowe/papers/ijcv04.pdf
[6] VisualSfM is available as an open-source program at http://www.cs.unc.edu/~ccwu/siftgpu/ and features both the integration of SIFT and a multicore bundle adjustment program.
[7] Sudipta N. Sinha, Jan-Michael Frahm, Marc Pollefeys and Yakup Genc, “Feature Tracking and Matching in Video Using Programmable Graphics Hardware,” Machine Vision and Learning Applications, November 2007, and “GPU Based Video Feature Tracking and Matching,” Technical Report 06-012, Department of Computer Science, UNC Chapel Hill, May 2006. http://cs.unc.edu/~ssinha/pubs/Sinha06TechReport.pdf

[8] Olivier Faugeras and Quang-Tuan Luong, The Geometry of Multiple Images (Cambridge, MA: MIT Press, 2001).
[9] For more on mesh generation and geometric modeling see Mario Botsch, Geometric Modeling Based on Triangle Meshes. http://lgg.epfl.ch/publications/2006/botsch_2006_GMT_sg.pdf
[10] There are many different kinds of both isotropic and anisotropic meshes being used in geometric modeling and computations, and many varieties of data structures used to keep track of them. For more on this see Mario Botsch, Polygon Mesh Processing (Boca Raton: CRC Press, 2010).
[11] Alexander I. Bobenko, Discrete Differential Geometry (Providence: American Mathematical Society, 2008)
[12] Michael Kazhdan, Matthew Bolitho and Hugues Hoppe, “Poisson Surface Reconstruction,” Proceedings of the Eurographics Symposium on Geometry Processing, 2006.
[13] William E. Lorensen and Harvey E. Cline, “Marching Cubes: A High Resolution 3D Surface Construction Algorithm,” Computer Graphics 21 (1987).
[14] Nina Amenta, Sunghee Choi and Ravi Krishna Kolluri, “The Power Crust, Unions of Balls and the Medial Axis Transform,” Computational Geometry: Theory and Applications 19 (2000): 127-153.
[15] The Autodesk group of 3D modeling programs can be found at http://www.123dapp.com/meshmixer.
[16] Bojan Mohar and Carsten Thomassen, Graphs on Surfaces (Baltimore: Johns Hopkins University Press, 2001).
[17] Gabor T. Herman, Geometry of Digital Spaces, (Boston: Birkhauser, 1998)
[18] See Susie Green, Andrew Bevan and Michael Shapland, “A Comparative Assessment of Structure from Motion Methods for Archaeological Research,” Journal of Archaeological Science 46 (2014): 173-181; Fabio Bruno, et al., “From 3D Reconstruction to Virtual Reality: A Complete Methodology for Digital Archaeological Exhibition,” Journal of Cultural Heritage 11 (2010): 42-49; Benjamin Ducke, David Score and Joseph Reeves, “Multiview 3D Reconstruction of the Archaeological Site at Weymouth from Image Series,” Computers and Graphics 35 (2011): 375-382.

Sunday, November 15, 2015

MAP: Exploring the World




I wrote the introduction for, and am the consulting editor of, a new book from Phaidon that juxtaposes maps, both traditional and digital, from many cultures and time periods. The book is available through bookstores and, of course, on Amazon. Click to hear an interview about the book with ABC Radio and NPR's All Things Considered.


We are currently in a golden age of cartography, in which the line between data visualization and mapping is becoming increasingly blurred by our ability to display and analyse large amounts of spatial data, much of it from sources not traditionally available to cartographers, like cell phones and social media. This new book features maps from the beginning of cartographic thought all the way to what I have termed in the introduction "cartography's final frontier," that is, the mapping of the human brain.



.....see reviews of the book and interviews about its cartographic methodology and philosophy at the Atlantic Monthly's CITYLAB, Wired Magazine, the BBC, Forbes Magazine and Travel + Leisure Magazine.




Thursday, November 05, 2015

Disappearing Acts:
the Life and Death of a Great Alpine Glacier

Using Nineteenth-Century Sources to Study the Melting of the Glacier Blanc

.....

“..... field geographers can speak with authority about the clarifying effects on the mind of direct physical danger in the real world and there exists a terrible antagonism between field geographers and armchair academics. Not only do those in their armchairs think and write junk, obfuscation, obscurantism, and endlessly convoluted self-referral to their literature in windowless libraries, they do not care about the human condition.”

--William Bunge,
Geography is a Field Subject
Area, 1983

This paper is a shorter version of a talk given in 2010 that will be published later this year.
See the session Measuring Environmental Impact at the Institute of Historical Research, University of London School for Advanced Studies, and my paper Disappearing Worlds.

The Glacier Blanc is the largest glacier in the Southern Alps. From the Dome des Ecrins it descends from a height of 4014 meters to its snout at 2315 meters; the glacier covers an area of 5.34 square km, extends 5.9 km in length, and has a mean slope of approximately 30%. Measurements have been carried out on the Glacier Blanc since the late nineteenth century, with the first real quantitative study in 1887.
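Those figures hang together: the drop from 4014 to 2315 meters over 5.9 km of length gives a mean slope of just under 30%, as a quick calculation shows.

```python
# Quick check of the glacier's mean slope from the figures quoted above.
top, snout = 4014.0, 2315.0          # elevations in meters
length_m = 5.9 * 1000                # glacier length in meters

mean_slope = (top - snout) / length_m
print(f"mean slope = {mean_slope:.1%}")   # about 29%, i.e. roughly 30%
```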
Glacier Blanc from the Dome de Neige des Ecrins (click on image to enlarge)

Click on Images to Enlarge
Several important studies have summarized the historical variation of the glacier's mass balance, length and thickness (see Thibert, E., J. Faure and C. Vincent, 2005, "Bilans de masse du Glacier Blanc entre 1952, 1981 et 2002 obtenus par modèles numériques de terrain," Houille Blanche 2, 72–78).

Approaching the Glacier Blanc from the Village of Ailefroide (click on image to enlarge)

The approach to the Refuge des Ecrins (click on image to enlarge)
In the center section the main stream of the Glacier Blanc is about 800 to 1000 metres wide. The greatest depth of ice occurs near the Refuge des Écrins, where it is up to 250 metres deep, some 30 metres less than it was in 1985. My visit to the glacier this summer showed that it had shrunk back significantly from its previous position, leaving only exposed rock near the lower Refuge du Glacier Blanc.

The map below, last updated in 1991, shows the position of the glacier relative to the refuge, and portrays the snout many tens of meters further down the valley than its present location. The glacier flows at a speed of around 40 meters per year in its central section (in the mid-1980s it moved at 50 m/yr) and at about 30 metres per year near its snout. Its reaction time, i.e. the time that elapses before the foot of the glacier advances or retreats due to major changes in conditions in the accumulation zone, is about 6 years. So the melting we are seeing now is a window into the recent past.


Map of the area around the Glacier Blanc showing the Refuges Ecrins, Blanc, and Cezanne along with the Glacier Blanc's smaller and rock covered partner, the Glacier Noir  (click on map to enlarge)
Glacier Blanc in the Summer of 2012 (click on image to enlarge)

Schematic of the Structure of the Glacier Blanc (click on image to enlarge)
As with almost all alpine glaciers, the foot of the Glacier Blanc has retreated significantly, as should be evident from the graph of its length below. In earlier times, most recently in 1866, it formed a single glacial system with its southern neighbour, the moraine-covered Glacier Noir, whose streams joined one another above the Pré de Madame Carle. The Glacier Noir is very different in morphology from the Glacier Blanc and is covered with rockfall and boulders from its lateral moraines.
Glacier Noir  (click on image to enlarge)
During the Little Ice Age the combined ice system reached its maximum extent in 1815 and ended roughly at the height of the Cezanne Hut (1,874 m), near the village of Ailefroide. Today, looking at the Refuge de Cezanne and the Pre Madame Carle, it is difficult to believe that the two glaciers ever extended that far down the valley.


Map of Historic Extent of the Noir and Blanc

Refuge de Cezanne near the historic confluence of the Glaciers Blanc and Noir (click on image to enlarge)
There are many sources of glacial heating, and the thermodynamics of glaciers is quite complex, requiring the solution of several coupled differential equations.

For more on the solutions to these equations and the modelling of glacier energy balances, see the notes Thermodynamics of Glaciers from the McCarthy Summer School at the University of Alaska. Characterizing the heat sources is further complicated by the difficulty of making field measurements in some areas of mountain glaciers that have complex or irregular geometries.
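Even so, the basic bookkeeping behind a melt estimate is straightforward: the net energy available at the surface, divided by the density of ice and the latent heat of fusion, gives the depth of ice melted. The sketch below uses illustrative flux values, not measurements from the Glacier Blanc.

```python
# Rough sketch of a surface energy-balance melt estimate. The flux values are
# illustrative placeholders, not measurements from the Glacier Blanc.
RHO_ICE = 900.0       # kg/m^3
L_FUSION = 334000.0   # J/kg, latent heat of fusion

# Net energy at the surface (W/m^2): shortwave + longwave + turbulent fluxes.
net_flux = 120.0 + (-40.0) + 30.0          # placeholder daily-mean values

seconds_per_day = 86400
melt_m_per_day = net_flux * seconds_per_day / (RHO_ICE * L_FUSION)
print(f"about {melt_m_per_day * 100:.1f} cm of ice melted per day")
```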


As of 2010 the tongue of the Glacier Blanc lies at a height of about 2,400 m. Over the 20th century it is estimated to have retreated by about 1 km, accompanied by a reduction in area of some 2 km². Between 1989 and 1999 alone the glacier lost about 210 metres of length; it retreated a further 300 metres in the years to 2006. The ice thickness in the centre was reduced between 1981 and 2002 by 13.5 metres, an estimated loss in volume of 70 million m³ of ice.
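Those last two figures are roughly consistent with one another: if thinning on the order of 13.5 metres is assumed to apply over most of the glacier's 5.34 km² area, the volume lost comes out near 70 million cubic metres, as a quick order-of-magnitude check shows.

```python
# Order-of-magnitude check: 13.5 m of thinning over roughly the full area.
area_m2 = 5.34e6          # 5.34 km^2 in square meters
thinning_m = 13.5

volume_loss = area_m2 * thinning_m
print(f"about {volume_loss / 1e6:.0f} million m^3")   # ~72 million, close to 70
```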


Seracs on the Glacier Blanc in 2012  (click on image to enlarge)
Crucial to the survival of a glacier is its mass balance, the difference between accumulation and ablation (melting and sublimation). Climate change may cause variations in both temperature and snowfall, and thus changes in mass balance. Changes in mass balance control a glacier's long-term behavior and are the most sensitive climate indicator on a glacier. From 1980 to 2008 the mean cumulative mass loss of glaciers reporting mass balance to the World Glacier Monitoring Service was -12 m. This includes 19 consecutive years of negative mass balances.
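The cumulative figure quoted above is simply the running sum of annual balances, each of which is accumulation minus ablation. A minimal sketch with hypothetical yearly values shows how a run of negative years adds up to a loss of several metres of water equivalent.

```python
# Sketch: cumulative mass balance as the running sum of annual balances
# (metres of water equivalent). The yearly values here are hypothetical.
annual_balance_m_we = [-0.4, -0.7, -0.2, -0.9, -0.5]   # accumulation - ablation

cumulative = []
total = 0.0
for b in annual_balance_m_we:
    total += b
    cumulative.append(total)

print(cumulative)   # each negative year pushes the glacier further from equilibrium
```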

A glacier with a sustained negative balance is out of equilibrium and will retreat, while one with a sustained positive balance is out of equilibrium and will advance. Glacier retreat results in the loss of the low-elevation region of the glacier. Since higher elevations are cooler than lower ones, the disappearance of the lowest portion of the glacier reduces overall ablation, thereby increasing mass balance and potentially reestablishing equilibrium. However, if the mass balance of a significant portion of the accumulation zone of the glacier is negative, it is in disequilibrium with the local climate, and such a glacier will melt away if that climate continues. The key symptom of a glacier in disequilibrium is thinning along its entire length.

In the case of a positive mass balance, the glacier will continue to advance, expanding its low-elevation area and resulting in more melting. If this still does not create an equilibrium balance, the glacier will continue to advance. If a glacier is near a large body of water, especially an ocean, it may advance until iceberg calving losses bring about equilibrium.
Melt Zone outlet in the summer 2012  (click on image to enlarge)
For the first time since 2001 the mass balance of the Glacier Blanc has become positive. It has gained 21 cm (water equivalent) over the last few years. However, the snout remains very thin and is vulnerable to another hot summer, such as the one we are experiencing here in the northeastern United States this year.

The last few years' positive figures have not really affected the glacier's disappearance, as the snout of the glacier is still retreating.

For more on the Glacier Blanc see glaciologist Mauri Pelto's excellent analysis on his website From a Glacier's Perspective.

Author taking a rest at the Refuge du Glacier Blanc  (click on image to enlarge)
Many historical sources that could help us in our efforts to understand the melting of the great Alpine glaciers remain locked up in small and obscure local journals and travelers' accounts. Many of these were published in places like the annual of the Alpine Club of France, or in travelers' accounts like James Forbes' Journals of Excursions in the High Alps of the Dauphine. Forbes, pictured below, was one of the first scientist-explorers of the Alps to understand the principles of glacier mechanics, and it is through his Travels through the Alps of Savoy that we can get a sense of how difficult it was to do glacier science in the mid-nineteenth century.



There are many more obscure sources, however, like the measurements of Prince Roland Bonaparte, who cataloged the sizes of many of the glaciers of the Dauphine in the 1880s and 90s.


Bonaparte took many photographs of his Alpine travels (click on images to enlarge), and a comparison of the landscapes he encountered with what is currently ice covered is quite shocking. One of Bonaparte's publications, Le glacier de l'Aletsch, focuses on his journey across the glacier in 1888-89.


The Aletsch Glacier, or Great Aletsch Glacier, is the largest glacier in the Alps. It has a length of about 23 km (14 mi) and covers more than 120 square kilometres (46 sq mi) in the eastern Bernese Alps in the Swiss canton of Valais. The Aletsch Glacier is composed of three smaller glaciers converging at Concordia, where its thickness is estimated to be nearly 1 km. It then continues towards the Rhone valley before giving birth to the Massa River.

The Aletsch, because of its size, is one of the fastest-shrinking glaciers in the alpine chain, as is apparent from the three images below, taken in 1979, 1991 and 2002 (click on images to enlarge).


The Aletsch Glacier has been studied for almost 200 years. The resulting data have been compiled by the Swiss glacier monitoring network and are shown graphically below.


The entire area around the glacier has been declared a UNESCO World Heritage Site. The United States Geological Survey has begun a repeat photography study that seeks to show in dramatic form the extent of glacial melting using historic photos. For example, in Glacier National Park they have looked closely at the Grinnell Glacier from various vantage points.


Most alpine glaciers are in trouble, and some have become dangerous as their melting has caused the formation of large glacial lakes in places where few existed before. About a decade ago a second lake appeared in front of the Arsine glacier, just across the Barre des Ecrins from the Glacier Blanc. I visited this series of glaciers several years ago just after the snow melt, crossing the large moraines left behind from its larger bygone days.

The author approaching the calving front of the Arsine Glacier (click on image to enlarge)



Arsine Glacier as mapped in 1979 and in 2008 (click on images to enlarge) Note the creation of a second lake due to the melting glacier



To give the viewer an idea of the scale of these glaciers, note the author in the center of the photograph

Glacial melting not only affects the activities of climbers and geographers but also the daily lives of those who live in villages near mountain environments and those who make their livings from them. One of the best recent studies of glacial melting from this perspective can be found in the book In the Shadow of Melting Glaciers: Climate Change and Andean Society by Mark Carey. Carey's book has been the subject of much discussion and was the focus of an H-Environment Roundtable Review in 2011.


For more on the melting of glaciers worldwide and their mapping, go to the resources available at Glacier Works and at the Extreme Ice Survey.



For more on glaciology and the effect of climate change on glaciers see:

H. Holzhauser, "Glacier Fluctuations in the western Swiss and French Alps in the 16th Century," Climate Change 43 (1999): 223-37.

H. Holzhauser, "Glacier and glacial-lake variations in west-central Europe over the last 3500 years," The Holocene 15 (2005): 791-803.

Roger Hooke, Principles of Glacier Mechanics (Prentice Hall, 1998)

A. Nesje and S.O. Dahl, Glaciers and Environmental Change (London: Arnold, 2000)

Ben Orlove, Ellen Wiegandt and Brian Luckman, Darkening Peaks: Glacier Retreat, Science and Society (University of California Press, 2008)

W.S.B. Paterson, The Physics of Glaciers (Butterworth-Heinemann, 2001)

Daniel Steiner, "Two Alpine Glaciers over the Past Two Centuries: a scientific view based on pictorial sources," in Darkening Peaks (2008): 83-99