Venetian Cryptography 1: Deciphering Diplomatic Documents — Progress Post 3

In our previous post we briefly sketched our plan to digitize the encrypted documents. This time we report on our progress, on minor adjustments to the plan, and on the difficulties we encountered along the way.

1. Binarization.
As described in the previous post, we need to binarize the documents in order to get good segmentation results. We have chosen the OCRopus software suite for the job. OCRopus is a collection of Python-based tools for document analysis, and its binarization is based on Otsu's method, which automatically performs clustering-based image thresholding, i.e. the reduction of a graylevel image to a binary image [1].
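For illustration, here is a minimal sketch of Otsu's method in Python (using numpy and Pillow; the file name page.png is a placeholder). In practice we simply rely on OCRopus' built-in binarization rather than our own implementation.

```python
# Minimal sketch of Otsu's thresholding; the input file name is a placeholder.
import numpy as np
from PIL import Image

def otsu_threshold(gray):
    """Return the threshold that maximizes between-class variance."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    sum_all = np.dot(np.arange(256), hist)
    sum_bg, weight_bg = 0.0, 0
    best_t, best_var = 0, 0.0
    for t in range(256):
        weight_bg += hist[t]
        if weight_bg == 0:
            continue
        weight_fg = total - weight_bg
        if weight_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / weight_bg
        mean_fg = (sum_all - sum_bg) / weight_fg
        var_between = weight_bg * weight_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

page = np.array(Image.open("page.png").convert("L"))  # grayscale page scan
t = otsu_threshold(page)
binary = (page > t).astype(np.uint8) * 255            # white background, black ink
Image.fromarray(binary).save("page.bin.png")
```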

The results we obtained are satisfying. The text remains readable, and the bleed-through issue (text from the next/back page showing through because of paper aging and ink infiltrating the page) did not greatly affect the quality of the outcome.

2. Segmentation.
We used the OCRopus software suite for the line segmentation of the documents as well. It did a good job of identifying clustered text regions and separating them from empty (non-text) areas. Both stages (binarization and segmentation) run reasonably fast and are well adapted to large datasets, but a script is required to integrate and automate the whole pipeline and to manage the outputs efficiently; a sketch of such a driver script is given at the end of this section. Below are extracts from the two segmented ciphers.

[Image: segmented extracts from the two ciphers]
For good-quality pages, the binarization and the segmentation are really clean; we got very few artifacts and little noise from page bleed-through. Page 4 contains more garbage segments because of the quality of its binarized image, but this is not a real issue: we can hand-pick the bad line images out, and we believe this will not affect our study of the digitized text. Several artifacts from page transparency:

[Image: line segments showing bleed-through artifacts]
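As promised above, here is a sketch of the kind of driver script that automates the two stages. It assumes the standard OCRopus command-line tools (ocropus-nlbin and ocropus-gpageseg) are on the PATH; the directory names are placeholders, and flags may need adjusting for a particular installation.

```python
# Sketch of a driver script for the OCRopus pipeline (paths are illustrative).
import glob
import subprocess

SCANS = sorted(glob.glob("scans/*.png"))  # raw grayscale page scans (placeholder path)
OUT = "book"                              # OCRopus working directory

# 1. Binarize all pages; OCRopus writes OUT/0001.bin.png, OUT/0002.bin.png, ...
subprocess.check_call(["ocropus-nlbin", "-o", OUT] + SCANS)

# 2. Segment each binarized page into individual line images
#    (written under OUT/0001/, OUT/0002/, ...).
bin_pages = sorted(glob.glob(OUT + "/????.bin.png"))
subprocess.check_call(["ocropus-gpageseg"] + bin_pages)

print("Line images written under %s/<page>/" % OUT)
```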

3. OCR (An attempt using OCRopus)
Since there is no training set or dictionary available for ancient Venetian, and because our dataset is rather small, all our attempts at character recognition have been unsuccessful.

4. Baseline frequency analysis of the ancient Venetian language.

[Images: two letter-frequency charts]
The left chart shows the frequency analysis of documents from an embassy in 1498 [2]; the right one shows the analysis of diaries from 1519 [3].

We have also performed a frequency analysis on a very large dataset: more than 600 pages from a diary written between 1496 and 1533 (a sketch of the counting script is given after the results below).
https://archive.org/details/idiariidimarino04sanugoog

This analysis produces the following letter frequencies (in percent):

I: 11.77
A: 11.39
E: 10.56
O: 9.21
R: 7.21
L: 7.16
N: 6.98
S: 5.50
T: 5.06
D: 4.35
C: 3.94
U: 3.21
P: 2.66
M: 2.59
V: 1.92
G: 1.39
F: 1.09
H: 1.09
B: 0.96
Q: 0.75
Z: 0.66
X: 0.26
J: 0.14
K: 0.05
Y: 0.05
W: 0.03
As these results show, the frequencies vary somewhat depending on the type of text and the time period.
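The counting itself is straightforward; here is a minimal sketch of the script we used (the input file name is a placeholder, and only plain Latin letters are kept, matching the table above).

```python
# Minimal sketch of the letter-frequency computation (file name illustrative).
from collections import Counter

with open("sanudo_diaries.txt", encoding="utf-8") as f:
    text = f.read().upper()

letters = [c for c in text if "A" <= c <= "Z"]  # keep plain Latin letters only
counts = Counter(letters)
total = sum(counts.values())

for letter, n in counts.most_common():
    print("{}: {:.2f}%".format(letter, 100.0 * n / total))
```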

5. Crowdsourcing (Amazon)
In our previous post we were planning to crowdsource the digitization work to EPFL students, and we did create a Google spreadsheet that embeds basic functions to get the job done. However, considering that there are more undigitized encrypted documents out there in the archives, we think it is better to provide a reusable method for solving similar problems in future projects.

After doing some research, we found that the Amazon Mechanical Turk service [4] suits our needs. MTurk is a crowdsourcing platform/marketplace for requesters who want a large number of HITs done in a short period of time. HIT stands for Human Intelligence Task, a task that is currently difficult for computers to do. Amazon claims that enough workers are available 24/7, that jobs are done in minutes, and that customers (us) only pay when satisfied with the results.
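We have not written any MTurk code yet, but to give an idea of what the interface looks like, here is a hedged sketch of posting one transcription HIT with the boto3 MTurk client. The title, reward, image URL, and question HTML are all illustrative placeholders, not our actual task.

```python
# Sketch only: posting one transcription HIT via the boto3 MTurk client.
# All task parameters below are illustrative placeholders.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    # Sandbox endpoint for testing; drop it to post real HITs.
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

question_xml = """
<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <html><body>
      <img src="https://example.org/line_0001.png"/>
      <form>...transcription input field...</form>
    </body></html>
  ]]></HTMLContent>
  <FrameHeight>450</FrameHeight>
</HTMLQuestion>
"""

hit = mturk.create_hit(
    Title="Transcribe one line of a 15th-century cipher",
    Description="Copy the symbols you see into the text box.",
    Reward="0.05",                       # USD per assignment
    MaxAssignments=3,                    # redundancy for quality control
    LifetimeInSeconds=7 * 24 * 3600,     # keep the HIT up for a week
    AssignmentDurationInSeconds=600,
    Question=question_xml,
)
print("HIT id:", hit["HIT"]["HITId"])
```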

Next tasks:
We will try to refine our text segmentation results and build a good lookup table (cipher character to ASCII; a toy illustration follows below). Next, we will find out exactly which services Amazon MTurk provides, with all the details and regulations. If everything turns out fine, we will seek approval from the professor and run the crowdsourcing task on Amazon.
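To make the role of the lookup table concrete, here is a purely hypothetical example of applying such a table to a transcribed cipher line; the symbols and mapping are invented, not taken from the real ciphers.

```python
# Purely hypothetical cipher-to-ASCII lookup table (invented symbols).
lookup = {"@": "A", "#": "E", "$": "I"}

ciphertext = "@#$"
# "?" marks symbols we have not mapped yet.
plaintext = "".join(lookup.get(c, "?") for c in ciphertext)
print(plaintext)  # -> "AEI"
```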

We also want to obtain a useful document concerning the encryption methods and techniques used in this period. Sadly, it is only available on paper at the French national library; we are planning to retrieve it and are currently getting in contact with the institution.

References:
[1] Otsu's method, https://en.wikipedia.org/wiki/Otsu%27s_method
[2] http://www.storiadivenezia.net/sito/testi/1489%20Trevisan.pdf
[3] http://www.storiadivenezia.net/sito/testi/1519%20Giustinian.pdf
[4] Amazon Mechanical Turk, https://www.mturk.com/mturk/welcome
Chr. Villain-Gandossi, "Les dépêches chiffrées de Vettore Bragadin, baile de Constantinople", in Turcica, IX/2-X (1978), pp. 56-106.

Team 1:

Vallée Josselin

Chong Han

Held Jeremy