I had such high hopes to do some serious blogging over our two week Christmas break. I have about a dozen posts brewing in my head, and I’m just trying to carve out the time to get them down. But, alas, holiday cheer, family visits, and a baby who decided it was time to give up our much-cherished sleeping through the night routine intervened.
Luckily, my task for work this morning involves one of the things I wanted to share, so I’m going to take the opportunity to document the process here.
About a year ago, DTLT was tasked with leading a project to complete a “Functional Inventory” of our teaching spaces. The point of the inventory was not to document serial numbers, models, or other typical physical details about equipment, but rather to describe what kinds of things a faculty member could generally expect to find in a room (and, thus, what they could expect to be able to DO in the room). The hope was that this could become both a resource for planning our strategy for upgrading classrooms/labs as well as a tool for faculty and department chairs as they developed schedules. (We currently have no enterprise scheduling tool at UMW.)
As I tried to tackle this project, I did a survey of the landscape in terms of tools I could use to collect and present this information. In the absence of a big scheduling application, I was looking at free, open-source products, and I quickly discovered that I was probably going to have to glue several pieces together in order to do what I wanted to do. I knew I needed some kind of Web-based mechanism for collecting the data (so that various individuals around campus could easily collaborate on collecting and maintaining the information). I also knew I needed a fairly intuitive interface for faculty to use to browse, filter, and use the data.
I figured the simplest, most readily available tool for collecting the data would be a Google Spreadsheet. With its similarities to Excel, it’s a tool just about anyone could master immediately. Unfortunately, with about 160 rooms that needed to be recorded and about 30-40 fields for each record, entering data into a flat spreadsheet interface could quickly get unwieldy (and lead to errors).
What I really needed was a way to build a form to collect the data. Right around this time, Google introduced their form feature, which looked promising. But a closer examination proved it wouldn’t suffice. Information about rooms could potentially need to be modified by several people, and we needed a mechanism to update information once it was entered. Google forms don’t allow you to edit existing records.
I knew about an online database service called Dabble from previous experimentation, and that’s where I looked next. Dabble is a pretty powerful application that essentially allows you to create relational databases. Frankly, it’s a bit of overkill for the data I needed to collect at this point. I really just needed a table of information with a clear Web form for adding and editing information (no relationships really need to be defined), but Dabble looked like the best bet.
At this point, I started to settle on how to present the information once it had been entered. Dabble has a couple of options, but none of them was really right. There is a “raw” view that allows you to edit, filter, and view the data in a more-or-less spreadsheet form, but it would be way too confusing to ask all of our faculty to navigate. There is a form/page view, but that’s really designed for entering and editing information more easily, not filtering and viewing it — at least not the way I wanted to filter and view it. And there is also a public data view that is just a spew of everything in the table — again, not very helpful.
Well, one of the great things about working in DTLT is that I have this amazing group of colleagues who regularly introduce me to technologies that capture my imagination. So at this point, thanks to Patrick’s previous work and evangelizing, I had a feeling I wanted to use Exhibit to do the presentation of the data. I love the way Exhibit allows you to easily put a nice visual front-end onto a set of data, and I was intrigued to learn more about how to customize the presentation of that front-end.
Now I needed to figure out how to get the data out of the tables in Dabble and into Exhibit. I knew that Exhibit could consume data from various types of sources, and I knew that Dabble could export data in a variety of formats. Surely, I could find some way to get them to talk to each other?
I started with what seemed like the simplest option. According to Exhibit’s Web site, it’s possible to use an Excel spreadsheet to feed data into the application — well, Dabble exports in Excel format! But closer examination showed that Exhibit would require me to use the Babel translator on a copied selection of data, essentially converting a snapshot of Excel data into JSON. I wanted the data to be updated in real-time and reflected in the front-end presentation — not because the data is too variable or time-sensitive but because I know everyone gets busy, and more than likely we’d forget to do the snapshot uploads as often as needed.
The other format that Dabble exports to that looked promising was JSON, which is actually the data format that Exhibit requires. But closer examination of THIS option revealed that there is no one single flavor of JSON, and the kind that Dabble spits out couldn’t easily be converted to the kind Exhibit wanted. Rats.
The final option was a Google spreadsheet. Dabble exports to Excel, and that format can be uploaded into a Google spreadsheet. Also, it’s possible to fairly easily use a Google spreadsheet as the data source for Exhibit. However, my initial assessment of this (last spring) suggested that it was still going to require someone to regularly export the data out of Dabble and upload it into the Google spreadsheet. And the process for doing so wasn’t easy — cutting and pasting the data was cumbersome and often didn’t work properly in the spreadsheet interface.
At this point, sometime in mid- to late-March, events, so to speak, overcame the project. For various reasons it got put on the back burner, with information for about 98% of the rooms collected and languishing in the Dabble database. I was more or less stuck with figuring out how to get it out of that source and into a presentation that worked and, possibly, a different data environment for better management and upkeep.
Last month, I finally turned my attention back to this conundrum, determined to figure out a solution that would meet all of my requirements. This time, I focused my attention primarily on the Google spreadsheet middle layer. I believed that there had to be a way to use this tool to do the “translation” from Dabble to Exhibit, and I finally cracked the puzzle a few days before the holiday break.
The magic comes with a function in Google spreadsheets that imports data into the table from another source. There are a variety of formats that it accepts, but the one that ended up working for me was .csv (as it happens, another format that Dabble exports into).
I plugged the following formula into the A1 cell of a table, and voilà! I had data populated into spreadsheet cells.
=ImportData("http://mburtis.dabbledb.com/publish/umwspaces/16435eb1-4dd8-4341-958c-5bbaea120140/spacesalldata.csv")
At this point, there were just a few remaining questions.
First, I had no idea how “live” the data would stay. The Google documentation didn’t make clear whether the formula maintained a live connection to the data source, or whether it just grabbed the data once, dumped it, and was done. To test, I left the spreadsheet open and began editing the originating Dabble database. Lo and behold, I found that after a while the spreadsheet reflected changes to records as well as new records I added.
The next question was a strange one — and a bit existential. What would happen when the Google spreadsheet was closed? Or, to be cute, if a Google spreadsheet is closed, does the imported data still make a sound? I wasn’t sure if whatever prompted the spreadsheet data to update was dependent on me actually having it open in a browser. Since I was just planning on using Google as the middle layer in my Dabble/Exhibit sandwich, I didn’t really want to regularly mess with (or even open) it.
So, I closed my eyes and my spreadsheet, made some more Dabble edits and waited to see what would happen in Exhibit. To my delight, Exhibit picked up on and began displaying new and edited records. Things were looking good.
The last problem I needed to solve was sort of a nit-picky one. In order for Exhibit to know what to do with the data, the data needs to live in fields with particular names. Basically, each field name has to sit between two curly brackets: {field1}. When I imported the data from Dabble, it was getting assigned the field names from that application, and I didn’t have that generic {label} field. I suppose I could have changed all the field names in Dabble, but I didn’t really want to do that — I thought it would make Dabble less user-friendly for users entering information.
So, I decided to do some transformations in the Google spreadsheet. I simply created another sheet, put the field labels I needed in the first row, and linked the cells on the second sheet to the appropriate cells on the first.
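In case that’s hard to picture, here’s a rough sketch of the two sheets (the cell references and the {building} field are just illustrative, not my actual column layout):

```
Sheet2!A1:  {label}       <- literal header text, in the curly-bracket form Exhibit expects
Sheet2!B1:  {building}    <- a made-up example field name
Sheet2!A2:  =Sheet1!A2    <- each data cell just links back to the imported Dabble data
Sheet2!B2:  =Sheet1!B2
```

Because the second sheet contains only references, it stays in sync whenever the ImportData formula on the first sheet pulls fresh data from Dabble.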
The final step in this Dabble-Google-Exhibit dance was to publish the Google spreadsheet and use it as the data source for my exhibit.
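Roughly, the wiring on the Exhibit page looks something like this — note that the spreadsheet key and worksheet ID below are placeholders, not my actual setup:

```html
<!-- Load the Exhibit API -->
<script src="http://static.simile.mit.edu/exhibit/api-2.0/exhibit-api.js"></script>

<!-- Point Exhibit at the published spreadsheet's public JSON feed.
     SPREADSHEET_KEY is a placeholder; "od6" is typically the first worksheet. -->
<link href="http://spreadsheets.google.com/feeds/list/SPREADSHEET_KEY/od6/public/values?alt=json"
      type="application/jsonp"
      rel="exhibit/data"
      ex:converter="googleSpreadsheets" />
```

The ex:converter attribute tells Exhibit to run the feed through its built-in Google Spreadsheets translator, which is what maps those curly-bracket column headers onto Exhibit field names.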
If you’ve followed along so far, well . . . wow. I’m impressed. Here are some links as a reward:
* The original Dabble DB public data view: Currently, I’m using Dabble’s default, free account. You get a limited number of users with this kind of account and your data has to be public (CC licensed). That’s fine because I didn’t need a lot of user accounts for the data entry and I’ve got no problem with the licensing because the final product is public.
* The public Google Spreadsheet: With data imported from the Dabble DB via a .csv export.
* The Exhibit presentation: Based on the data in the Google spreadsheet (via RSS/JSON) with a few modifications (field names, etc.)
There’s still a bit more to do. I need to finish tweaking the Exhibit front-end, and we’ve got a bit more work to do on the data. I’d also like to create parallel presentations for labs and computer carts (we have the data in Dabble; I just haven’t gotten through the other steps).
In the end, I’m really glad I was able to figure out how to make this work. I feel like I’m using each of the tools for what they do best (for my purposes). That said, there are now multiple points of failure and it feels a bit like it’s held together with shoestrings and duct tape. Also, my testing was on a small set of data; I have no idea how this is going to hold up under the increased pressure of a larger data set. Time will tell.