Several governments are starting to publish open data: datasets generated by the government, made freely available for citizens to use for value-added app development, analysis, and feedback. For instance, the City of Vancouver (Canada) Open Data Catalogue publishes 130 datasets. The subject matter ranges from tabular files of city councillor contact information to geographical datasets of zoning districts. Formats range from Comma-Separated Values (CSV) to SHP to KML and beyond.
It would be nice for each of these open data portals to have a dataset of datasets: their catalogue of datasets, itself published as a structured data file. The catalogue dataset should have metadata describing each dataset (name of the dataset, URL of its download page, the formats it is available in, and maybe a description of the dataset's format and attributes, or a URL to same).
What is a good data model and a good format for such a catalogue dataset? If this is a solved problem, I’d like to suggest that Vancouver reuse that solution, instead of inventing one.
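For concreteness, here is a hypothetical sketch of what one catalogue entry might look like, written as a Python data structure (it could equally be plain JSON). Every field name and URL here is my own invention for illustration, not an established standard:

```python
import json

# Hypothetical sketch only: field names and URLs are illustrative, not a standard.
# One catalogue entry describes a single dataset published by the portal.
catalogue_entry = {
    "name": "Zoning districts",
    "description": "Boundaries of zoning districts within city limits",
    "landing_page": "http://data.vancouver.ca/zoning.htm",   # hypothetical URL
    "formats": ["CSV", "SHP", "KML"],
    "schema_url": "http://data.vancouver.ca/zoning-attrs.htm",  # hypothetical URL
}

# The catalogue dataset itself is then just a list of such entries,
# serialisable as JSON (or flattened to CSV, one row per dataset).
catalogue = [catalogue_entry]
print(json.dumps(catalogue, indent=2))
```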
Update: in response to the question of why it is desirable to have the catalogue as a structured dataset, I can think of three classes of use case.
- Analysis across all the datasets of a data provider. It is convenient to get a list of all datasets, with links to descriptions etc., which I can import into a spreadsheet and annotate. Someone else may want to count the total number of records published, or the breadth of government activity covered by the data. In my case, I'm working on a Vancouver Open Data language census (see the sketch after this list).
- Analysis of corresponding datasets across multiple data providers. For instance, one might want to aggregate a list of all zoning boundary datasets published by Canadian cities. That is easier if one can sift through dataset lists by machine instead of by hand.
- Analysis of dataset catalogue change over time. It might be interesting to analyse the growth in Open Data from one year to the next. Structured catalogues make this easier to automate.
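To make the first use case concrete, here is a minimal sketch, assuming a portal published its catalogue as a JSON file shaped like the hypothetical entry above. The file name and fields are my assumptions, not anything Vancouver actually publishes today:

```python
import json
from collections import Counter

# Assumes a hypothetical catalogue.json: a list of entries, each carrying
# a "formats" list, as in the sketch earlier in this question.
with open("catalogue.json") as f:
    catalogue = json.load(f)

print(f"{len(catalogue)} datasets published")

# Tally how many datasets are offered in each format.
format_counts = Counter(fmt for entry in catalogue for fmt in entry["formats"])
for fmt, count in format_counts.most_common():
    print(f"{fmt}: {count}")
```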
Answer:
I think you are talking more about a presentation problem (a coherent list of all the data they provide) rather than a data-modelling problem.
Any "OK" software developer, using any somewhat mature language, will be able to process CSV, XML, JSON, or any other flavour of data format/model.
Why do you think that this data should be structured? Who would need such data in a machine-readable format, and for what?
It's not like open data with thousands of rows, tables, etc., is data you need to crawl or find automatically. Finding the proper "download link", as you say, is a one-time thing. Why should it be automated/structured/machine-readable?
Ask them to make a list with links and descriptions and publish it as a web page; what more do you need?
Answer:
I suggest looking at http://spec.datacatalogs.org.
I also note that Data Catalog / Portal software like http://ckan.org/ (which powers e.g. data.gov.uk) provides an API for getting all the datasets in a standard form.
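As a sketch of what that looks like in practice: CKAN exposes an Action API whose package_list and package_show calls return dataset names and per-dataset metadata as JSON. The code below assumes the portal exposes CKAN's standard v3 Action API; the site URL is just one example of a CKAN deployment.

```python
import json
from urllib.request import urlopen

# Assumption: the portal runs CKAN and exposes the standard v3 Action API.
site = "https://data.gov.uk"

# package_list returns the names of all datasets on the portal.
with urlopen(f"{site}/api/3/action/package_list") as resp:
    names = json.load(resp)["result"]
print(f"{len(names)} datasets on {site}")

# package_show returns full metadata (title, resources, formats, ...)
# for a single dataset, given its name.
with urlopen(f"{site}/api/3/action/package_show?id={names[0]}") as resp:
    metadata = json.load(resp)["result"]
print(metadata["title"], [r.get("format") for r in metadata.get("resources", [])])
```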