Day 8: Dirty Laundry and the Potential for Public Art History

Is public art history possible? What does it look like? How can we convey the political nature of art better to people (anyone)? How do we overcome certain perceptions people have about art history? How do we translate our personal interests and disciplinary ideas and trends for a broader audience? Do we need to do this at all?

These were a few of the challenging questions we discussed today, and ones that are critically important to address. None of them are easy to answer. Some of them make me uncomfortable, if only because they force me to recognize and accept some of my own biases, problems, and issues.

I’ve continued to think about the conversations of today, and imagine I will continue to do so long after #doingdah14 ends. What is public art history? And how can I do it? I am deeply committed to the idea of engaging with relevant publics (thanks, Nancy) outside of academia. I am intrigued by the notion of nerd sourcing/crowd sourcing/community sourcing, even if I am still unnerved by it to some degree. For those of us who like to be in complete control–and let’s face it, that’s most of us art historians–this might be a challenge (the ultimate challenge perhaps?).

This brings me to another point of discussion today: who are the relevant publics of our work? How do we engage them? And if we are not engaging with them, what tools/activities/structures/collaborations/etc. do we need to do so? In my group, an interesting idea was raised by Nancy Micklewright, who noted that the Freer and Sackler Galleries have been developing programming around IPOP, or the idea that people fall into one of four categories of engagement: ideas, people (stories), objects, and physical. This resonated with me not only because I found this idea interesting for the museum space(s), but also because I see this as a useful way to think about my project(s) engaging with relevant publics.

Over the past week and a half I’ve circled around these ideas, occasionally attacking them head on. How do I engage with people about the arts of death and dying in Mexico? Why should I create this project? What’s in it for me, but more importantly what’s in it for anyone else? Is the project worthwhile? Where can I improve it? I keep wondering if perhaps it is too narrowly focused. After all, I want to write a book about this material, but that doesn’t mean the digital project has to follow that idea exactly. Perhaps it would be better as a project about the arts of death and dying more broadly. Or about the arts of death and dying between a narrow time span (e.g., 1600-1800).

I’ve also thought a lot (more than I care to admit) about whether I should create a project about the Sacred Heart. It is the subject of my forthcoming book, and I’m ready to move on to another topic. Really ready. However, I think there are still some unanswered questions I have about the material that digital art history can help answer. Moreover, I think some of the tools allow me to make connections I was confident existed, but could never “prove” they did; some tools we’ve learned this week have helped me to do so. Lastly, with the Sacred Heart I can really see the collaborative nature of the project develop in interesting ways. Imagine, in an ideal world, if other art historians, historians, religious experts, pop culture consumers, and others across the globe all participated in a project on the Sacred Heart as cult, devotion, object, image, and icon. I’m kind of loving the idea, even though I am loath to admit it.

I’m going to return to this issue of public art history in a future post. I’m looking forward to discussing the issue with colleagues, friends, and family back home.


Day 7: The Power of Visualization


I am indebted to Spencer for teaching us his formulas for some of our data. I am not the most skilled person with Excel. For my work on the Sacred Heart, I’ve always wanted to make a chart like the one above showing the percentages of Sacred Heart texts published in different countries in the eighteenth century. I probably should have Googled it and learned earlier, but I didn’t, and now I’m not sure why.

And look at how beautifully it displays my data (disclaimer: not all of the data is input yet; this represents only 71 texts out of hundreds–stay tuned). In a simple, straightforward manner, it conveys that Mexico published many texts on the Sacred Heart between 1700 and 1850. I was only able to develop such a nice pie chart after inputting my data into a spreadsheet. This also required me to tidy my data; it turns out it was really messy!


Our discussions over the past several days have impressed upon me the necessity of tidy data. While I always organize my research (my data), I realize that this doesn’t necessarily translate to tidiness. Creating a few Excel spreadsheets with this in mind, I was able to make the pie chart above as well as input my data into a few other programs that visualized it in different ways. I also input some of it into Timeline JS. Rather than inputting individual items one by one–and with nothing to show but the final product–the detailed, tidy spreadsheets I’ve been working on this week transfer between many programs and platforms. I see this transferability as an essential ingredient for any project, particularly at the early stages, when you might explore and tinker.
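As a sketch of what tidiness buys you: once the data sits in a tidy layout–one text per row, one column per variable–the percentages behind a pie chart fall out in a few lines. The titles, countries, and years below are invented placeholders, not entries from my actual dataset.

```python
from collections import Counter

# Hypothetical tidy rows: one text per row, one variable per column.
# (Illustrative values only -- not the real Sacred Heart dataset.)
texts = [
    {"title": "Devocion al Sagrado Corazon", "country": "Mexico", "year": 1732},
    {"title": "Novena al Corazon de Jesus", "country": "Mexico", "year": 1748},
    {"title": "Il Sacro Cuore", "country": "Italy", "year": 1740},
    {"title": "Le Sacre Coeur", "country": "France", "year": 1755},
]

# Count texts per country, then convert counts to percentages --
# exactly the numbers a pie chart of publication shares needs.
counts = Counter(row["country"] for row in texts)
total = sum(counts.values())
shares = {country: 100 * n / total for country, n in counts.items()}

for country, pct in sorted(shares.items(), key=lambda kv: -kv[1]):
    print(f"{country}: {pct:.0f}%")
```

The same rows, exported as a spreadsheet, feed Excel, Timeline JS, or Palladio without reshaping–which is the transferability point above.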

I’m also going to experiment further with Palladio. It offered interesting possibilities for data visualization that could be useful for my project.


Day 6: Mining Data

I had high hopes for the applicability of data mining to my current/future project and my long-term research on the Sacred Heart. I’ll largely discuss my research on the Sacred Heart because I’m familiar with the material, having worked with it/on it for the past decade. I thought it would be useful to have a “safety” to see how well these data mining tools work. Verdict: so far, I’ve not been impressed with Google N-grams, Bookworm, Voyant, or Open Calais. I hesitated to write this, if only because I imagine some of my cohort found at least one of these programs useful. Or so I hope.

I felt frustrated with Google N-grams and Bookworm in particular. I couldn’t find useful sources that relate to my project, so I decided to try them with material related to the Sacred Heart. When the results came back from both, I noticed how skewed they were. No texts published in Mexico between 1730 and 1748? Incorrect. And where were the spikes in the early nineteenth century? My excitement turned to skepticism. What was Google using to gather this data? How was it sorting it? Did accent/diacritic marks make a difference? How can I use data that is skewed, if at all? How do I know when the data isn’t skewed? I felt similarly about Bookworm. These programs seem to have an inherent Anglocentrism, which is not to say that they cannot be improved to correct that in the future. But for now I don’t feel I can use them in any meaningful way.
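On the diacritics question: accent marks really can make two visually identical strings compare as unequal, depending on how they were encoded, so a search index that doesn’t normalize Unicode will silently miss matches. A small sketch (the search term is just an illustrative example):

```python
import unicodedata

def strip_accents(text: str) -> str:
    """Drop combining accent marks after NFD decomposition."""
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if not unicodedata.combining(ch))

# "corazón" typed with a precomposed ó vs. a plain o + combining accent:
precomposed = "coraz\u00f3n"
combining = "corazo\u0301n"

# A naive code-point comparison treats them as different strings...
print(precomposed == combining)   # False

# ...but they match once both are normalized to the same form,
# and both reduce to accentless "corazon" for loose matching.
nfc_match = unicodedata.normalize("NFC", precomposed) == unicodedata.normalize("NFC", combining)
print(nfc_match)                  # True
print(strip_accents(precomposed))  # corazon
```

Whether Google’s corpora handle this consistently for Spanish-language texts is exactly the kind of thing their interfaces don’t reveal.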

Voyant similarly saddened me. What high hopes I had for mining my PDFs! Alas, they were dashed. Instead, I inserted my book manuscript on the Sacred Heart. While not relevant to my project, I was delighted to see how the program mined my manuscript and visualized my top word choices. See:


And another graph that Voyant generated for me displays where certain keywords are used most often in specific chapters.


Overall, I left today realizing that text mining still needs development in many areas. I also believe that text mining–at least as it was defined today and as I’m using it here–is less relevant to many art historians than data mining of, say, archival documents or images would be (if that’s even really possible at this time).

A major point I did take home today–thank you Lisa Rhody–is to make sure I have tidy data. After our discussion today of structured vs. unstructured data, I began to think of ways to create tidy data for my current book project on the Sacred Heart. Long ago I created a .doc that contained important events, object production dates, publication date of texts, and more that I arranged chronologically. Today I began placing it in a spreadsheet and making it into tidy data. My goal is to map this data–or at least some of it. While not related to my deathways project, this is immediately relevant to my book manuscript. I might even find that it directly affects some of my ideas.
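A minimal sketch of what that tidy, map-ready spreadsheet might look like: one row per event, one column per variable, including latitude and longitude so mapping tools can plot it. The events and dates below are invented placeholders (the coordinates for the two cities are real), not entries from my actual chronology.

```python
import csv
import io

# Hypothetical tidy layout for the chronology. Each row is one event;
# each column is one variable. The lat/lon columns make it mappable.
fieldnames = ["year", "event_type", "description", "place", "lat", "lon"]
rows = [
    {"year": 1737, "event_type": "publication",
     "description": "Sacred Heart text printed", "place": "Mexico City",
     "lat": 19.4326, "lon": -99.1332},
    {"year": 1740, "event_type": "object",
     "description": "Painting commissioned", "place": "Puebla",
     "lat": 19.0414, "lon": -98.2063},
]

# Writing to an in-memory buffer here; swapping in open("events.csv", "w")
# produces a file that spreadsheet and mapping tools can import directly.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=fieldnames)
writer.writeheader()
writer.writerows(rows)
print(buffer.getvalue())
```

The payoff of the structure is the same as with the pie-chart data: one spreadsheet, many tools.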

And just for fun…

My Animoto video (I didn’t post one last week, so I quickly made one for show-and-tell):

It’s nothing fancy, but it gives you an idea of how Animoto looks as well as how it might work for a project.

While Animoto doesn’t appear to have any immediate relevance to my project, I do think it offers students a wonderful way to engage with material.


Day 5: Geospatial Art History & the Art of Mapping Hipsterdom

Digital Art History bootcamp ended on a high note (for me) as we delved into mapping and visualizing change over time. Before the institute started, I possessed little knowledge of mapping but knew it would be useful for my project. For example, I want to be able to show the areas affected by epidemics in sixteenth-century Mexico alongside those locations that display artworks related to death and dying. I assumed that mapping would be complicated and messy, but after Friday I learned that there are plenty of tools that are easy to use.

The Google Map Engine proved straightforward and user-friendly. I didn’t have an excel file with data for my project, so I decided to turn to the only reasonable alternative: mapping hipsters in a small part of DC (see the above embedded Google Map). While mapping hipsters doesn’t relate to my project, the data did allow me to experiment with mapping.

I was so excited by my new-found ability to create easy, readable maps that I called my husband during the lunch break to walk him through the process. I thought it would be useful for him as he thinks about opening a new practice. He plans to plot potential “rivals,” transportation stations, and rentable locations, for instance.

I wasn’t sure that anything could top my excitement, but then we moved on to the New York Public Library’s Map Warper and StoryMap. I was in awe of the Map Warper. I found an eighteenth-century map of Mexico, which I georectified. This is an incredibly powerful tool made available to everyone by the NYPL. For example, the map I used allowed me to pin cities in New Spain (colonial Mexico) onto a modern map of Mexico. Cities that had different names are now matched with their modern equivalents. The historical map “cloaks” the current one. For me this was one of the most powerful ways to demonstrate how maps lie and manipulate space. While the Map Warper wasn’t as immediately relevant to my project, it certainly does provide me with historical maps to orient people in the viceroyalty of New Spain.

StoryMap amazed me with its creative capabilities. I decided to create a story map of the spread of hipsters in Brooklyn (of course!). I’m not sure yet how I would incorporate it into my project on Mexican deathways, and I don’t know that I will. I abide by the notion that the tools and technology shouldn’t dictate the project. However, I know I will use it in my courses. It was easy and fun to use. Plus, I think it offers wonderful possibilities of engaging students with certain types of materials.

To sum up, I see great potential in using some of these mapping tools for my project. The Google Map Engine in particular will become a crucial component of visualizing some of my data. I dreaded what I thought would be a complicated and time-consuming process, but in reality it will develop much faster–the click of a few buttons really.


Day 4: Drinking from the Firehose?

Confession: There was a moment today when I felt like I was drinking from the firehose. While the embedded video below doesn’t entirely capture the feeling, it does allow me to showcase my newfound abilities:

I left today’s institute meeting feeling like a giddy child, one who has boxes of chocolates and sugary candies and who doesn’t even know where to begin delighting in the sugary sweetness. After learning about Scalar, Omeka, and Drupal Gardens yesterday, I woke this morning hoping we’d have time to explore them further. The day turned out so much better than I imagined, as we discovered numerous tools to annotate images and videos. Thinglink became my new favorite tool (scroll over the images here). I enjoy annotating images, especially those that are unfamiliar to students (e.g., the Aztec “Calendar Stone”).

I’ve always done this in Photoshop or some other image-editing software, where I draw arrows and include text, but the result is always somewhat messy. Thinglink allowed me to annotate not only with text but also with links and videos. And it was easy to use. Come Fall semester, I will use this in my classes and ask students to annotate images. It will be a wonderful way to encourage close looking at images, even if it is a different type of close looking than we might normally expect. I know that I will also employ this tool for my project on Mexican deathways, for the same reason that I will use it with students. I can embed these annotated images into my websites as well. For all these reasons, Thinglink ranks as my number one tool of the day.

We also explored Animoto. I must confess that I am normally dismissive of videos made from PowerPoints or the like–I find them distracting. Yet Animoto has potential for both my teaching and my project on Mexican deathways, particularly because it limits text, focuses on the visual, and displays a clean finished product.

Learning to annotate in YouTube was also wonderful. I feel sheepish that I didn’t know how to do this before today. And what wonderful possibilities it has. I can also download (for free) my Animoto videos and then upload them to my YouTube channel.

Surprisingly, by 4 PM I felt energized and ready to keep exploring all these new tools. They were all simple enough that I could guide students and colleagues who might want to use digital tools for their own projects or research.

A final note about today’s learning: the new Omeka 2.2 platform that we installed is much better than the earlier version I used yesterday. With a few changes to the hex color codes and a header image, I had a decent-looking website in a couple of minutes, rather than the much slower process the earlier version required. I plan to use this new version for the deathways project. In particular, it will work well for creating exhibitions of objects.

Resources at Brooklyn College and CUNY


There are a few helpful resources at Brooklyn College beyond Artstor, WorldCat, etc. I found some excellent resources on fair use, as well as some individuals who might be willing to help me get clearances. Within the CUNY system, I imagine the CUNY Commons will be useful for making connections with people interested in digital humanities and digital art history, finding different platforms (or at least others who use them), and experimenting with other platforms and tools. I also discovered that Brooklyn College has a list of OERs that could aid my project. I didn’t find anything about specific CMS platforms, but it is possible that this information is not well represented online. I’ve no doubt that the CUNY Graduate Center has amazing resources for those interested in digital humanities. Now I just need to find and access them.


Day 3: Omeka and Scalar and Drupal, Oh My!


Today we chose one of three CMS platforms with which to play, adapting to the new environment and its quirks. I chose Scalar because I wanted to have multimedia content on my site, and this CMS seemed the best option. While it took me longer to adjust to the platform than I expected, once I “figured out” how to use it I was impressed and amazed at what I could do. I even audibly gasped once or twice.

Not only did Scalar allow me to incorporate text and video in interesting ways, but I could annotate them as well. I grew somewhat frustrated trying to work out how to annotate, finding the Help Guide not all that useful for this particular component. However, Celeste guided me in the right direction, and soon enough I was making my own annotations. This is what I wanted to learn! Long have I annotated images in Photoshop or even Apple software, using permanent arrows and text. My previous system was a necessary evil, and it bothered me that I couldn’t make those annotations invisible. Even though Scalar’s annotation tools require me to look at the image online, I can click on a permalink and–presto!–a lovely annotated image or video is projected on a screen.

Time passed quickly once I became immersed in the details of Scalar. While there were certainly many quirks, some of them odd and unnecessary, I look forward to plumbing the depths of Scalar further. I think it might be a useful platform for my project and certainly for teaching. Now I need to learn Omeka and Drupal Gardens, which I imagine will offer valuable ways to see data.

A final note: it seems useful to learn CSS and HTML for Scalar. There are few design options, none of which I really liked, but if I knew CSS and HTML I could make modifications at will. I am now determined to learn both.


Day 2 (LKE): Scraping, Sorting, and Scrutinizing Sources

After I wrote the book-length entry yesterday, I didn’t have much time to reflect on digital art history. What is it? Are there digital art histories? Is it a method, a community? Tools? As became clear yesterday, there is no one definition–at least not yet. Some of yesterday’s most fruitful conversation centered on the threshold concepts of art history. These are the core concepts that each person must “master” (pass the threshold) to enter into the shared community of experts in our field. In small groups, we were asked to discuss the following question: what are the threshold concepts for art history that separate us from other disciplines? This involved some time travel because we had to imagine our experiences in our introductory art history courses. Each group brainstormed and listed some excellent ideas.


Today, Day 2, we’ve discussed a variety of topics. Our Twitter feed (#doingdah14) is the best place to follow along because we are processing a lot of information. Some highlights from today include the potential for digital art history to be subversive or to provide new, alternative narratives to those offered by people in power.

We’ve also learned some fascinating information about how to harness the power of Google when searching for images, how to learn who links to our own websites, and so on. Given my obsessive (and bizarre) interest in finding public domain images, I learned some new ways to narrow my searches.

We also discussed Zotero, which most people had never used. As a Zotero convert, I can say that I prefer it to EndNote and love that it is open source. However, I do find it clunky with images. I still prefer my own method: folders (e.g., “Aztec Stone Sculpture”).

Today’s homework is to “identify relevant digital repositories and consider ways to create an intentional archive of sources for our next day. Blog about it.” So here goes. I could spend days finding different repositories because of my project’s broad time span. I decided to limit myself to the sixteenth century. I began my search by looking to repositories that I’ve used in the past, like the Archivo General de la Nacion in Mexico City. I decided to make a Zotero folder for all the digital repositories so that I could quickly and easily list my findings as well as continue adding to the folder.

Here are some of the ones I think will benefit my project most:

Archives and Libraries with Primary Texts

Archivo General de la Nacion, D.F. (Scans of archival documents)
John Carter Brown Library (digital copies of printed period texts–this is incredibly useful)
Franciscana Library, Cholula, Mexico (digital copies of 114 printed period texts)
Google Books (perhaps this doesn’t count? However, I’m including it here. If I search for different subjects like “sangre de cristo” or “muerte” and limit the dates to 1521-1600, I can find so many digital copies of printed period texts. Then I can make the PDFs readable. My life–my marriage!–is indebted to Google Books)
CDC–a source for information, like this article

Image Repositories

Florentine Codex on the World Digital Library (the entire 12 volumes!)
PESSCA (for possible print sources)
Sadly, there aren’t really that many beyond searching on Artstor.


What is clear to me after searching for suitable and useful repositories is that there are some for textual materials, but few for images. I think I will have to gather the images from a wide array of sources (my own photos, books, museum websites, Artstor, and others). While this task is daunting, it also encourages me to make a digital project where many of these images can be found in one location.
