High-Value Cargoes’ Networks

I. Introduction and Project Goal

Venice was a key trading hub for many high-value cargoes. From the trading records kept in Venice, historians can infer a great deal about the past. We focus on silk and slaves because of their significance in historical studies.

Our project goal is twofold: an interactive data visualization interface that lets researchers understand the trading network, and a data sourcing mechanism through which users with no prior programming experience can extend our high-value cargo dataset with, for example, extra trading routes or prices of goods. Combining these two parts, we believe the final product of this project will be a prototype of a useful tool for historians to investigate and demonstrate the networks of high-value cargoes along with other relevant information.

II. Details of the Implementation

1. System Structure
The system consists of four main components, shown in the following figure: “Interactive Data Visualization,” “Data Sourcing,” “Data Processing,” and “Data Files.” The purpose of “Interactive Data Visualization” is to present all the data in an interactive way. “Data Sourcing” lets users input extra data. The “Data Processing” component is responsible for structuring the new data passed from “Data Sourcing.” The structured data are then written into the “Data Files” component, which consists of three JSON files recording cities, trading routes, and historical events respectively.

Figure 1.  System Structure
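To make the data flow concrete, the three JSON data files might be shaped along the following lines. The field names and sample records here are illustrative assumptions, not the project’s actual schema:

```javascript
// Assumed (illustrative) shapes of the three JSON data files.
const cities = [
  { name: "Venice", country: "Italy", coordinates: [12.34, 45.44] },
  { name: "Constantinople", country: "Turkey", coordinates: [28.98, 41.01] }
];

const routes = [
  { cargo: "silk", period: [1300, 1500], cities: ["Venice", "Constantinople"] }
];

const events = [
  { year: 1453, cargo: "silk", text: "Fall of Constantinople disrupts overland routes." }
];

// A lookup from city name to its record, useful when drawing route nodes.
const cityByName = new Map(cities.map(c => [c.name, c]));
console.log(cityByName.get("Venice").country); // → "Italy"
```

With records like these, the “Data Processing” step reduces to validating a submitted entry and appending it to the matching file.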

2. Visualization Components
Our data visualization has three main components. They are closely linked with each other and are all realized with HTML, CSS, and JavaScript.

a. The Control Panel
The control panel offers a set of buttons for selecting the cargo of interest and a time slider for selecting the period to investigate. The upper and lower bounds of the time span, as well as the granularity of the time slices, are derived dynamically from the input cargo dataset. It is also possible to select multiple cargoes at the same time to generate an aggregated graph, provided their time spans overlap.
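Deriving the slider bounds and checking span overlap could be sketched as below; the `period: [start, end]` field on route records is an assumption about the dataset, not its documented format:

```javascript
// Derive the time slider's bounds from the selected cargo's route records.
// Each record is assumed to carry a `period: [start, end]` pair of years.
function timeSpan(routes) {
  const starts = routes.map(r => r.period[0]);
  const ends = routes.map(r => r.period[1]);
  return [Math.min(...starts), Math.max(...ends)];
}

// Two cargoes can be shown as one aggregated graph only if their spans overlap.
function spansOverlap(a, b) {
  return a[0] <= b[1] && b[0] <= a[1];
}

const silk = [{ period: [1300, 1500] }];
const slaves = [{ period: [1360, 1499] }];
console.log(timeSpan(silk));                                  // → [1300, 1500]
console.log(spansOverlap(timeSpan(silk), timeSpan(slaves)));  // → true
```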

b. The Historical Event Displayer
It is important for historians to connect historical events to changes in the routes. We designed a dynamic text displayer which shows crucial trading-related historical events for the cargo and the time period selected via the Control Panel. Another feature of this dynamic displayer is keyword highlighting. In our application, the most important keywords are the names of cargoes and of cities on the routes. They are highlighted in different colors, which are also used to highlight the corresponding city nodes on the map.
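Keyword highlighting of this kind can be done with a single regular-expression replace over the event text. The color map and sample sentence below are illustrative, not the project’s actual palette:

```javascript
// Wrap known keywords (cargo and city names) in colored <span>s, so the
// event text shares colors with the matching city nodes on the map.
// The keyword-to-color map here is an assumed example.
const keywordColors = { Venice: "#d62728", silk: "#1f77b4" };

function highlight(text) {
  return text.replace(
    new RegExp(Object.keys(keywordColors).join("|"), "g"),
    word => `<span style="color:${keywordColors[word]}">${word}</span>`
  );
}

const html = highlight("Venice imported silk from the East.");
// html now wraps "Venice" and "silk" in colored spans.
```

In practice the keyword list would be built from the city and cargo names in the data files rather than hard-coded.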

c. The Dynamic Map of Trading Network
The dynamically generated map, which includes cities and trading routes, is realized with Data-Driven Documents (d3.js) and TopoJSON. The map data in JSON format is downloaded from mbostock’s topojson examples [1] and is built by the World Atlas and U.S. Atlas projects. The world map is rendered in a uniform color to give a clear view of the trading network drawn on top of it. We selected the equirectangular projection for the map because it is simple to understand and is a de facto standard for GIS applications [2]. In addition, the map can be zoomed and panned with the mouse.
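Part of the projection’s simplicity is that it maps longitude and latitude linearly to screen coordinates; d3 ships it as a built-in projection, but the underlying math can be sketched in plain JavaScript. The scale and translate defaults below are assumptions for illustration:

```javascript
// Equirectangular projection: x and y are linear in longitude and latitude.
//   x = tx + k * λ (radians),  y = ty - k * φ (radians)
// (screen y grows downward, hence the minus sign). This mirrors what a
// d3 equirectangular projection computes, modulo its default constants.
function equirectangular([lon, lat], scale = 100, translate = [480, 250]) {
  const rad = Math.PI / 180;
  return [
    translate[0] + scale * lon * rad,
    translate[1] - scale * lat * rad
  ];
}

console.log(equirectangular([0, 0])); // → [480, 250] (the map's center)
```

Because the mapping is linear, zooming and panning reduce to adjusting the scale and translate, which is what d3’s zoom behavior does under the hood.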

As for the trading network, cities are presented as nodes. The cities described in the displayed historical event are marked in different colors for emphasis. The name and country of a city are shown when the user hovers over its node.

Figure 2. The Interactive Dynamic Map and Network of Silk inside Bootstrap framework

III. Data Sourcing

1. Existing Data Sources
The main sources of data we collected for the project are listed as follows:

a. A research paper titled “Domestic Slavery in Renaissance Italy,” written by Sally McKee, a professor at the University of California, Davis. The data presented in this paper cover about two thousand slaves and come from the State Archives of Venice, spanning 1360–1499. The data are shown in tables of “Number of Slave Sale Contracts by City,” “Number of Slave Sale Contracts by Decade,” “Origins of Slaves Sold,” “Average Age of Slaves by Gender,” and “Average Price of Female and Male Slaves in Sale Contracts by Decade.”

b. The second source is the website Silk Route.net, which has useful information about the routes of silk, slaves, and spices. The cities along the silk route were extracted from a map it provides. Since the map is drawn from ancient sources, some of its cities cannot be identified today, and the period the map depicts is not clearly stated. We therefore cannot plot routes for different time periods in our visualization.

c. The website A Chronology of the Silk Road records critical historical events from 3000 B.C. to 2010 A.D. It was an excellent aid in building the historical events for the visualization.

d. The last website, Slave Voyages, has details of the slave routes and statistics between 1514 and 1866. This resource is abundant; unfortunately, its database does not include trading routes to Venice. Nevertheless, the website is of great help to researchers interested in the slave trade of that period and in cities other than Venice.

2. Data Sourcing Mechanism
During our data collection, we learned that quantitative trading data are very hard to obtain on the Internet. Most of the trading records are not digitized and are stored in the State Archives in Italy, which we cannot access. Since detailed information about the silk and slave trades of the Republic of Venice is sparse, we implemented an interface which allows users to input data they obtain in the future, compensating for the current lack of valid data. We expect the missing pieces to be uncovered as the Venice Time Machine Project progresses.

The interface is composed of a pull-down menu of the three types of cargo, plus fields for the time period, the price and quantity, the routes, and the corresponding historical events. All of these data are saved into the data files, and the website is then refreshed with the updated source. The following figure shows the data sourcing page.

Figure 3. Data Sourcing Webpage

[1] https://github.com/mbostock/topojson/tree/master/examples
[2] http://en.wikipedia.org/wiki/Equirectangular_projection