Saturday, April 22, 2006

CAA 2006 - Day 4 and Closing Banquet

Well, the CAA is now over, but the effects are still lingering. Although I have a bunch of notes from some great papers on Day 4 (Friday), I will not get to them until tomorrow.


Honestly, the closing banquet and the fun that followed have put me out of commission. Between the great conversation, great food, great entertainment, and most of all, great people, the CAA 2006 in Fargo ended with a bang! Too bad my head, after a day's worth of bumpy air travel, is still reminding me of the fun.

I have a few stories and a bunch of pictures from last evening to share. As soon as I get my senses back, I will have them on the blog.


Photo (taken by me) of a few of the great people
who stuck it out with me until the end.

Friday, April 21, 2006

CAA 2006 - Day 3 Addendum: ArchaeoML and Tagging



I couldn't quite get all this written last night, so here is a brief recap of the Day 3 afternoon session I attended...

XML and Tagging
The afternoon session on database management was chaired by database rock star Edward A. Fox. Ed Fox is gaining quite a fan club here in Fargo; well, there are at least two of us. If you ever need a real-deal session chair, he's the man!

Back to the session: Tyler Bell of Oxford ArchDigital opened his paper with why XML is like sex: everyone thinks they are great at it, everyone is doing it, everyone is talking about it, but not everyone is doing it as well as they think they are. This informative talk, delivered in a point/counterpoint format, followed with what it means to actually "use" XML and why to "use" it. Although there were no specific applications, Tyler pointed to a number of possibilities and prudent uses for XML in cultural heritage.

Building upon Tyler's XML info, the session turned to a paper by Eric Kansa, delivered by David Schloen. Eric has developed the Open Context project as "an Internet-based archive that aims to preserve and promote our shared, global heritage." Open Context is, in short, a searchable and taggable archive of user-provided cultural heritage content. The key to this project is the development of a folksonomy based on the tags provided by the community. Yes, this is the application of "web 2.0" ideas to cultural heritage management! The discussion that followed this paper was quite lively and full of great ideas.
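
To make the tagging idea a bit more concrete, here is a minimal sketch, entirely my own invention and not Open Context's actual code, of how a community tag index might tie items together: shared tags become the cross-project relationships.

    from collections import defaultdict

    class FolksonomyIndex:
        """Toy tag index: community tags link items across projects."""

        def __init__(self):
            self.items_by_tag = defaultdict(set)   # tag -> item ids
            self.tags_by_item = defaultdict(set)   # item id -> tags

        def tag(self, item_id, user, tag):
            # Normalize so "Pit House" and "pit house" converge.
            tag = tag.strip().lower()
            self.items_by_tag[tag].add(item_id)
            self.tags_by_item[item_id].add(tag)

        def related(self, item_id):
            # Items sharing at least one tag with this one.
            hits = set()
            for tag in self.tags_by_item[item_id]:
                hits |= self.items_by_tag[tag]
            hits.discard(item_id)
            return hits

    index = FolksonomyIndex()
    index.tag("locus-14", "alice", "Pit House")
    index.tag("feature-3", "bob", "pit house")
    print(index.related("locus-14"))   # {'feature-3'}

Simplistic, of course, but it shows where the technical problems creep in: normalization, synonyms, and spelling are doing all the work.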

Is a folksonomy the answer to creating a universal language within a discipline that has as many languages as practitioners? Some in the audience agreed that it is a fantastic way to organize data, while others preferred to view it as a way of adding value and finding data relationships where specialists have never seen them. The ideas of democratizing data and building an archaeological thesaurus based on community tagging were explored. Of course, there are technical problems with relying on tagging as the main system of data description and cross-project mapping. Interestingly, David Schloen, who presented the Open Context paper on Eric's behalf, is involved with a similar project, OCHRE (Online Cultural Heritage Research Environment), at the University of Chicago.

OCHRE is an XML-structured schema, called ArchaeoML, with a Java web frontend. The general idea behind Open Context and OCHRE seems similar, but the underlying data structure is quite different. As I understand it, OCHRE's use of ArchaeoML creates a data structure and mapping capability that is much more robust than community tagging alone. The ArchaeoML structure of OCHRE also carries the capability for community tagging and relationship identification. So although the OCHRE project imposes many more standards, it still has all the abilities to build a folksonomy-based thesaurus while retaining a research-grade data structure in the XML schema. I hope I did not butcher those descriptions too badly.
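
I have not seen the ArchaeoML schema itself, so purely as a hypothetical illustration of what an item-based XML record might look like, here is one built and queried in Python. The element names are invented for the example, not taken from the real schema.

    import xml.etree.ElementTree as ET

    # Hypothetical item-based record; element names are invented
    # for illustration, not taken from the real ArchaeoML schema.
    locus = ET.Element("locus", id="A7.L014")
    ET.SubElement(locus, "description").text = "Pit house floor"
    finds = ET.SubElement(locus, "finds")
    ET.SubElement(finds, "find", type="lithic", material="chert")
    ET.SubElement(finds, "find", type="ceramic", material="grit-temper")
    # Community tags can ride along in the same structure.
    tags = ET.SubElement(locus, "tags")
    ET.SubElement(tags, "tag", user="mtg").text = "pit house"

    print(ET.tostring(locus, encoding="unicode"))
    for find in locus.iterfind("finds/find[@type='lithic']"):
        print(find.get("material"))   # chert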

Is the internet and/or archaeological community ready for multi-vocal community interpretation and cultural heritage with user-added value? I think so. It seems that many others feel the same way. How structured should the common vocabulary be? The OCHRE and CIDOC CRM (Stephen Stead, Paveprime Ltd., UK) projects have gone a long way in creating a usable structure that is flexible enough to map varying data sources, but there was also a backing voice for the idea of letting the community develop the language via tagging. I hope that these discussions continue to expand the possibilities for not only how archaeologists interact, manage, archive, and research, but also how the non-specialist public can learn, interpret, and, most interesting to me, teach us about the world's cultural heritage.

Thursday, April 20, 2006

CAA 2006 - Day 3: Archaeological Predictive Models, XML, Tagging


Another enthralling day in Fargo, ND. There was an abundance of great papers today and plenty of thoughtful discussion to match.


With my chances for a free happy hour drink slipping away, I will just quickly cover some of the highlights.

Inductive vs. Deductive Predictive Models: Battle-Royal!!!
Will inductive models really send the earth spiraling to hell in a handbasket? Or are deductive models the way to true enlightenment? Well, this ever-present topic was brought up today in a great symposium on Archaeological Predictive Models.

After David Ebert, not to be confused with James Ebert, discussed the "7 deadly sins of inductive modeling," two authors followed with papers, based on opposing theories, that produced very compatible results. Scott Madry of the University of North Carolina presented a seven-county-wide predictive model based on an inductive correlative approach. Following this, Thomas Whitley of Brockington and Associates presented a deductive, behavior-based model for a 2600-square-mile (did I record that right?) study area in South Carolina. For my money, this battle of the Carolinas was the pinnacle of the CAA so far. Okay, so no one was throwing chairs, but I sure did scribble some frantic notes.

Basically, the inductive model was criticized for distilling the vast diversity of the environment and the archaeology down to a series of correlations. Still, the end result is a quantitative, testable, and field-verifiable model, which fits very well with the requests of the Department of Transportation. Alternatively, the deductive South Carolina model establishes both environmental and behavioral adaptations, which are cross-correlated into a matrix of cost-benefit surfaces. These surfaces are composed into any number of formulas of settlement/subsistence and behavioral adaptations to produce testable hypotheses to be modeled. The end result of this approach is a hypothesis-testing framework of formulas. This is great for explanatory research, but not great for DOT review. In this case, Whitley combined all 46 testable formulas into a single surface encoded with a 1-10 rating of site "possibility"; I'm not really sure what the correct term for that metric would be.
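
As I understand it, that final collapsing step is conceptually simple, whatever the real procedure was. Here is a minimal numpy sketch, with invented weights and random stand-in surfaces rather than Whitley's actual 46 formulas, of combining hypothesis surfaces into a single 1-10 rating layer:

    import numpy as np

    # Stack of cost-benefit / hypothesis surfaces over the study area
    # (3 random stand-ins here; Whitley's model had 46 formulas).
    rng = np.random.default_rng(0)
    surfaces = rng.random((3, 200, 200))
    weights = np.array([0.5, 0.3, 0.2])       # invented weights

    # Weighted combination, then rescale to a 1-10 "possibility" rating.
    combined = np.tensordot(weights, surfaces, axes=1)
    lo, hi = combined.min(), combined.max()
    rating = 1 + 9 * (combined - lo) / (hi - lo)
    print(rating.shape, rating.min(), rating.max())   # (200, 200) 1.0 10.0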

All in all, it was a great demonstration of the application of both methodologies. I truly enjoyed both sides of the theoretical coin. Certainly in the future I will ramble on a bit more about the war waged between inductive and deductive models.

Okay... I would like to include a few words here about the papers on XML and DBs I saw today, but I happened to be out a little too late with a couple of good friends, so it will have to wait. Tomorrow (actually today!) will be the last full day of papers, so it should be exciting!

Wednesday, April 19, 2006

CAA 2006 - Day 2: 3D data capture, Agent Based Modeling, Mobile Applications


Although blogging etiquette seems to be a taboo topic, there must be some rule against blogging after a night of free wine and beer. The folks of Fargo have certainly rolled out the red carpet for this conference; it has been a great time so far.

Since the wine flowed like, well, wine... I will have to confine myself to a quick recap of today's events.

3D Data acquisition
3D data acquisition papers covered a few interesting topics. First, Mark Mudge and Carla Schroer of Cultural Heritage Imaging discussed their method of Reflection Transformation Imaging (RTI). This very cool, 3D-ish data capture uses a fixed camera taking approximately 16 pictures, each with an individual light source set up at a different coordinate location. The output is a series of images that are synthesized into a single image where each 2D pixel encodes 3D data in the form of illumination direction and lighting characteristics (normals). The RTI viewer displays the 2D image but allows the user to control the location of the light source with the movement of the mouse. In essence, this displays the topography based on the surface normals. This technique works great for small and fragile objects. The team is working on extending the capabilities so that the RTI map can be applied to passively collected 3D geometry and viewed with adjustable, accurate lighting at a very low file size. The team has also developed an open source, Java-based viewer.
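
The relighting trick at the heart of RTI is easy to sketch: given per-pixel surface normals, you can shade the image for any light direction with a simple Lambertian dot product. This is a bare-bones approximation of the idea, not CHI's actual viewer code:

    import numpy as np

    def relight(normals, light_dir):
        """Lambertian relighting from a per-pixel normal map.

        normals:   (H, W, 3) array of unit surface normals
        light_dir: (3,) light direction vector
        """
        l = np.asarray(light_dir, dtype=float)
        l /= np.linalg.norm(l)
        shading = np.einsum("hwc,c->hw", normals, l)
        return np.clip(shading, 0.0, 1.0)     # dark where light grazes away

    # Toy normal map: a surface tilting with x (stand-in for real data).
    h, w = 64, 64
    x = np.linspace(-0.5, 0.5, w)
    normals = np.zeros((h, w, 3))
    normals[..., 0] = x                       # slight x tilt
    normals[..., 2] = np.sqrt(1 - x**2)       # keep unit length
    img = relight(normals, light_dir=(1, 0, 1))   # light from the right
    print(img.shape, img.max())

Moving the mouse in the viewer just swaps in a new light_dir and reshades, which is why the interaction feels instantaneous.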

Secondly, there was a great 3D acquisition project from the folks at the Center for Advanced Spatial Technologies (CAST) at the University of Arkansas. Briefly, this team used a long-range, time-of-flight laser scanner to perform a high-density survey of Machu Picchu, Peru, and Tiwanaku, Bolivia. The research group conducted over 225 different scans, comprising over 150 million points, in 13 (non-consecutive) days. The resulting point clouds are downloadable, along with a free viewer from PolyWorks, at www.cast.uark.edu/invirmet
Check it out!

Agent Based Modeling
Agent Based Modeling (ABM) is a topic close to my heart and something I wish to talk about more. Today I saw a few papers on ABM; here is a quick overview of two different approaches. First, Luke Premo, from the University of Arizona, discussed an exploratory ABM for the Plio-Pleistocene of Africa. The intention of Premo's model is not to create a realistic rendition of real-world conditions (topography, climate, soils, geology). Instead, his intention is to create a simple model to test (falsify) a contemporary hypothesis: the Central Place Foraging (CPF) model as applied to early hominids in East Africa. Premo created a model, entitled SHARE (Simulated Hominid Altruism Research Environment), that embeds rules of food consumption and hominid movement in a patchwork grassland/forest environment. In the end, Premo replicates the general patterns of artifact distribution (called patches and scatters) that are currently attributed to CPF, based on the location of the forest patches. At first glance, it seems obvious that the model would produce a patchwork distribution that corresponds to the patchwork of the forest, thereby making the results seem dubious. But that is precisely what Premo is getting at. In the past, this pattern has been attributed solely to CPF as a consequence of modern ethnographic analogy. His simple model shows that CPF is not the only mechanism that can lead to such an artifact distribution. Not that CPF is falsified, but the SHARE model shows that other things may be at work and that modern ethnographic analogs are not the only correlation. Interesting!
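
To give a flavor of how simple such an exploratory model can be, here is a toy sketch, my own cartoon and not Premo's SHARE code, of random-walking foragers who discard artifacts wherever they find food on a patchwork environment. The scatters end up clustered on the forest patches with no central place in sight:

    import random

    random.seed(42)
    SIZE = 20
    # Patchwork environment: roughly 20% forest cells holding food.
    forest = {(x, y) for x in range(SIZE) for y in range(SIZE)
              if random.random() < 0.2}
    foragers = [(SIZE // 2, SIZE // 2)] * 10
    discard = {}                              # cell -> artifact count

    for step in range(500):
        moved = []
        for (x, y) in foragers:
            # Random walk; eat (and discard an artifact) in forest cells.
            x = (x + random.choice((-1, 0, 1))) % SIZE
            y = (y + random.choice((-1, 0, 1))) % SIZE
            if (x, y) in forest:
                discard[(x, y)] = discard.get((x, y), 0) + 1
            moved.append((x, y))
        foragers = moved

    # Scatters cluster on the forest patchwork without any central place.
    print(sorted(discard.values(), reverse=True)[:5])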

On the other side of the coin, John Murphy, also of the University of Arizona, detailed the Dynamic Interface Architecture System (DIAS). This system is a framework for the development of integrative ABM models that share data inputs and actions. The DIAS framework can take models of different systems and link them as modules into a larger, more comprehensive model. For instance, Murphy discussed the ENKIMDU model (a culture-based model of Mesopotamia) and showed how models of climate, soils, and cow herding behavior, which were created for different reasons, could be plugged into the ENKIMDU model using the DIAS system. This framework gives you the power to take specialized models, created by experts in each field, and plug them seamlessly into your own model. According to Premo, this model would fall into the Emulation class of ABMs.
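
I do not know the internals of DIAS, but the architectural idea, a shared contract that lets independently built models exchange state each time step, can be sketched hypothetically like this (module names and logic are all invented):

    class Module:
        """Minimal plug-in contract: read shared state, write updates."""
        def step(self, world: dict) -> None:
            raise NotImplementedError

    class Climate(Module):
        def step(self, world):
            world["rainfall"] = 300 + 50 * (world["year"] % 3)  # toy cycle

    class Herding(Module):
        def step(self, world):
            # Herd grows or shrinks with this year's rainfall.
            world["herd"] = int(world.get("herd", 100) * world["rainfall"] / 320)

    world = {"year": 0}
    modules = [Climate(), Herding()]          # plug in any expert model here
    for year in range(5):
        world["year"] = year
        for m in modules:
            m.step(world)
        print(year, world["rainfall"], world["herd"])

The point of the design is that the Climate author never needs to know the Herding model exists; they only agree on the shared state.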

Mobile Applications
And finally, Claus Dam of the Danish Heritage Agency, Denmark, displayed the cell-phone-based locative technology they have developed. In short, this system, Nordic Handscape, is broken into two aspects. One part of the project is a system for tourists to retrieve museum data, while the other is for professionals and the public to retrieve data on archaeological sites based on their GPS location or their geocoded address. I do not want to sell these projects short, but given the time, I will say that they did everything I could have asked them to do. The system knows your location, and it will push you the location, data, and interpretation of archaeological sites within a given radius. The part I find most appealing about this project is that it has an individual interpretation and data-collecting aspect that nears social networking. Each user has their own website with their preferences, on which they can upload photos of sites and tell their own stories about those sites. At the end of the talk, there was a very interesting discussion based on this social aspect. One audience member asked what will happen when we allow lay people to edit out "professional" interpretations of archaeological sites. Other audience members jumped to the defense of the democratization of data and welcomed the wiki-esque nature of the system. I agree with the wiki folks and would enjoy reading interpretations derived from non-archaeologists. The topic of democratized data has been an undertone of the whole conference.
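
The "sites within a given radius" query at the core of that second aspect is a classic great-circle distance filter. Here is a sketch of how such a lookup might work, with invented site data rather than anything from the Nordic Handscape backend:

    from math import radians, sin, cos, asin, sqrt

    def haversine_km(lat1, lon1, lat2, lon2):
        """Great-circle distance between two points in kilometres."""
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = (sin(dlat / 2) ** 2
             + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2)
        return 2 * 6371 * asin(sqrt(a))

    # Invented example sites (name, lat, lon).
    sites = [
        ("Viking burial mound", 55.68, 12.58),
        ("Bronze Age barrow", 55.40, 11.35),
    ]

    def sites_near(lat, lon, radius_km, sites=sites):
        return [(name, round(haversine_km(lat, lon, slat, slon), 1))
                for name, slat, slon in sites
                if haversine_km(lat, lon, slat, slon) <= radius_km]

    print(sites_near(55.67, 12.57, radius_km=5))   # the phone's GPS fix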

I would love to delve deeper into the topic of what social networks and massively distributed archaeological interpretations could do for our discipline, but that is another day.

Way past my bedtime... Talk to you all tomorrow.

CAA 2006 - Day 2: Morning Poster Session


I had some time this morning to walk around and check out the posters. Here is a quick recap of two interesting topics:

Google Earth, VRML, Native American Pit Houses
A University of British Columbia team composed of Michael Blake, Sue Formosa, Dana Lepofsky, and Dave Schaepe created a poster for a very cool and effective project along the Fraser River in BC. Working with the local native community, the Coast Salish, the UBC team used Surfer 8.0, Global Mapper, Google Earth, and a Cortona VRML viewer to efficiently and inexpensively disseminate archaeological data, gathered from a Coast Salish pit house village, to the interested parties.

The base data for this project is a high-density laser transit survey of the site. Brought into Surfer and turned into a DEM, the depressions in the landscape that mark the former locations of Native American pit houses become very evident. From Surfer, the DEM is exported to Global Mapper and saved as VRML for web viewing. Alternatively, the data was also exported as a .jpg, imported into Google Earth, rectified, and exported as a KML. The end result is a highly detailed and realistic depiction of the pit house site which can be geographically explored. This implementation of geospatial technology is just the type of project we should see more of in the near future. The technology is cheap, if not free, the technical overhead is low, and the results are easily interpretable in a non-archaeological context and accessible to anyone with a computer. (I realize the last requirement excludes 85% of the world's population, but hopefully that will change one day.) The UBC team said they are working on a public site to share their info.
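
The Google Earth half of that pipeline is refreshingly simple: a KML GroundOverlay just points at the rendered image and states its bounding box. A minimal sketch, with an invented file name and made-up coordinates:

    # Minimal KML GroundOverlay: drapes a rendered DEM image over the
    # terrain in Google Earth. File name and bounds are invented examples.
    kml = """<?xml version="1.0" encoding="UTF-8"?>
    <kml xmlns="http://earth.google.com/kml/2.0">
      <GroundOverlay>
        <name>Pit house village DEM</name>
        <Icon><href>pithouse_dem.jpg</href></Icon>
        <LatLonBox>
          <north>49.3855</north>
          <south>49.3830</south>
          <east>-121.4410</east>
          <west>-121.4450</west>
        </LatLonBox>
      </GroundOverlay>
    </kml>
    """
    with open("pithouse_overlay.kml", "w") as f:
        f.write(kml)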


Iowa Lithic Database
The second poster of note covers an interesting project run by the Office of the State Archaeologist at the University of Iowa. They have created an electronic database of their in-house lithic samples which can be searched to help locate the geologic source of your lithic artifacts. In the words of the authors:
"This assemblage is based on macroscopic identification elements including geological references, physical samples, mapped source locations, and a visual basic script program, all combined to form a GIS based system for comprehensive state-wide lithic identification and analysis."
Check out their site for more info and program download...
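
I would guess the core identification step amounts to matching macroscopic attributes against the reference samples. A toy sketch of that kind of lookup, with invented attributes and values rather than the actual Iowa data:

    # Toy attribute-matching lookup in the spirit of a reference database
    # (attribute names and samples invented, not the Iowa OSA data).
    reference = [
        {"name": "Burlington chert", "color": "white", "luster": "dull", "fossils": True},
        {"name": "Knife River flint", "color": "brown", "luster": "waxy", "fossils": False},
    ]

    def score(sample, ref):
        keys = [k for k in ref if k != "name"]
        return sum(sample.get(k) == ref[k] for k in keys) / len(keys)

    unknown = {"color": "brown", "luster": "waxy", "fossils": False}
    best = max(reference, key=lambda ref: score(unknown, ref))
    print(best["name"], score(unknown, best))   # Knife River flint 1.0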

There should be some great papers presented today. I'll provide an update this evening.

Tuesday, April 18, 2006

CAA 2006 Day 1: Digital Earth and Flanders Archaeology


The first day of the CAA 2006 conference has concluded. Before I head down to the hotel bar to... uh... “network”, I would like to share a bit about two interesting papers I saw today.

The first paper covered a topic briefly mentioned in a previous post: Tijl Vereenooghe's Google Maps-based Flanders archaeology project, "OpGraven". After reviewing this project for the audience, Tijl unveiled a new project he has begun. Erfgoed in Vlaanderen is a Flickrmap-based project (a Flickr photo database tied to a Flash map) that maps and provides photos of the standing historic structures of the Flanders region. Please note that Tijl is not ready to release this site just yet, so improvements will be made. By using Flickrmap, Erfgoed in Vlaanderen is able to handle a larger volume of data points than Google Maps, and it has the social-network aspect of Flickr that will allow users to include their own photos of the mapped structures. After discussing these two projects, Tijl noted that he has had to spend very little time completing them, with a total cost of $5. That raised the eyebrows of those not familiar with the technologies. The presentation can be found here.

The second paper of interest, centering upon the Digital Earth concept, was presented by Karl Grossner of the University of California, Santa Barbara. As a student of Michael Goodchild, Grossner's work is centered on the creation of a true "digital earth system" based on the "geolibrary" concept. The geolibrary concept, as evolved by Goodchild, is a georeferenced, searchable, indexable library that is served through an interface with the ability to open and process these data with GIS tools. Grossner uses this concept and builds upon it by defining a "digital earth system" as a geolibrary that interfaces with a virtual globe model and GIS tools to create, ultimately, a massively distributed GIS. Grossner's paper spent time differentiating the digital earth system from today's virtual globes (Google Earth, World Wind, etc.). Whereas contemporary virtual globes are not technically GISs and are primarily concerned with information tied to a location, the digital earth system will be geared more towards providing knowledge about places through distributed GIS databases, knowledge organization systems (authority lists, domain ontologies, review and editing capabilities), UIs, querying, and clearinghouses.

Certainly this is a lofty goal, but perhaps an idea that just needs its time. With developments in GE, World Wind, and the new capabilities of ArcExplorer, hopefully the building blocks of technology will find their place in Grossner's schema. The technology will develop in that direction, but it will require a user movement to lead to a massively distributed GIS and a few good brains to keep the course steady. As stated by James Boxall [PDF] (2002: 12), "The real issue, in relation to the development of digital earth, is where the librarians will come from in order to help shape the geolibrary component of DE [Digital Earth]."

My synopsis here is limited, so luckily, Grossner revealed that this topic will soon be published as a journal article, though I will have to track him down to find out which one.

Each of these papers was presented at a symposium devoted to the Electronic Cultural Atlas Initiative (ECAI). The ECAI "uses time and space to enhance understanding and preservation of human culture." Projects such as TimeMap and the Silk Road Project are derived from the ECAI. Check them out...

Tomorrow’s agenda includes modeling pathways, 3D data capture, simulation, DBs, and GIS applications.

Monday, April 17, 2006

Computer Applications and Quantitative Methods in Archaeology: CAA 2006

Here I am in Fargo, North Dakota, at the Computer Applications and Quantitative Methods in Archaeology (CAA) 2006 conference. Starting tomorrow, there will be 5 days of great presentations and workshops dealing with topics such as, but not limited to, 3D technologies, remote sensing, VR, and database theory as applied to archaeology and cultural heritage. I will try to cover some of the interesting topics I hear about over the next few days. Hopefully this hotel internet connection will hold out long enough. Unfortunately, I am unable to upload any photos via FTP or HTTP, but I will do my best to let you know what is going on.

Stay tuned for updates!