
Recognizing copies of paintings


Suppose you are looking at two similar paintings. How do you know they are similar? Which features convince you of that? For a human this might be an easy task, but what about machines? Can we translate the way a human brain compares pictures into machine language? The main purpose of this project is to tackle these questions using a computational approach. Obviously, the whole thinking process cannot be literally simulated; however, it may be possible to identify the key moments people pay attention to while performing comparisons. One of the fundamental aesthetic concepts related to human visual perception is the shape of elements and how these shapes can be associated with simple geometrical compositions. For this reason we propose a comparison method based on representative structures and patterns potentially identified in paintings.


The deliverables of this project will be:

  • an interface that allows the user to:
    • load paintings
    • identify skeletons of elements and global structures
    • extract the skeleton data and process it
  • an algorithm able to find local and global similarities between paintings



Since we are interested in comparing different paintings, we will implement a skeleton-based method composed of three main steps. First, a set of skeletons will be extracted from the given paintings using plain undirected graphs (trees). Each picture will be decomposed into elements (for example, human figures) and a corresponding structure will be defined for each of them. Second, we will design a method to compare pairs of skeletons and consequently assess whether they are similar or not. This algorithm will take advantage of machine learning techniques to make predictions or decisions, rather than only following a fixed sequence of programmed instructions. The third step aims at improving our analysis by focusing on the general structure of the pictures.

Figure [1]: Autunno, Torino private collection
Figure [2]: Autunno, 1959

The number of available paintings is clearly unbounded, so working on a limited database of pictures becomes of primary importance. Our dataset contains a total of 21 artworks by different painters. Since they represent typical scenes and subjects of the Italian Renaissance, they presumably belong to this artistic and cultural movement. This period was chosen because its paintings are characterized by well-defined shapes that are easily extractable from the ensemble.

The dataset will be split into two groups: a training part and a testing part. The former will be used to configure the algorithm and estimate the required parameters, while the latter will be used for cross-validation, i.e. to analyze the quality of the implemented program.
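As a minimal sketch of how the split might be performed (the function name, the 70/30 ratio, and the fixed seed are our assumptions, not part of the proposal):

```python
import random

def split_dataset(paintings, train_fraction=0.7, seed=0):
    """Randomly split the painting list into a training and a testing part.

    Hypothetical helper: the 70/30 ratio and the fixed seed are
    illustrative choices, not values fixed by the project.
    """
    rng = random.Random(seed)
    shuffled = paintings[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# With the 21 paintings of the dataset, a 70/30 split yields 14 + 7.
paintings = [f"painting_{i}" for i in range(21)]
train, test = split_dataset(paintings)
```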


Step 1: Creating skeletons

The first fundamental step is to define the elements we are going to focus on. Analyzing the Italian Renaissance period, it can be observed that groups of people are the most prominent subjects in paintings. Therefore we consider only humans as relevant elements. Building a skeleton structure on a human body is a very natural mental process that will let us find a simple similarity “protocol”.

A list of rules must be determined in order to define skeletons consistently and unambiguously. The complete human structure will consist of 13 vertices (represented as pairs of coordinates on the picture) corresponding to the following parts of the human body: head, chest, shoulders, elbows, hands, basin, knees and feet. The set of edges is: (head – chest), (chest – left shoulder), (chest – right shoulder), (left shoulder – left elbow), (right shoulder – right elbow), (left elbow – left hand), (right elbow – right hand), (chest – basin), (basin – left knee), (basin – right knee), (left knee – left foot) and (right knee – right foot). It should be noted that some elements may have incomplete skeletons, i.e. some of the vertices or edges might be missing (Fig. [3]). This can happen when people are partially covered by other objects; only connected vertices will be taken into account.
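The 13 vertices and 12 edges above translate directly into a simple data structure. A minimal sketch (the function and constant names are ours; the vertex and edge lists are exactly those stated in the text):

```python
# The 13 body parts named in the text, used as vertex labels.
BODY_PARTS = [
    "head", "chest", "left_shoulder", "right_shoulder",
    "left_elbow", "right_elbow", "left_hand", "right_hand",
    "basin", "left_knee", "right_knee", "left_foot", "right_foot",
]

# The 12 edges listed in the text, as pairs of vertex labels.
EDGES = [
    ("head", "chest"),
    ("chest", "left_shoulder"), ("chest", "right_shoulder"),
    ("left_shoulder", "left_elbow"), ("right_shoulder", "right_elbow"),
    ("left_elbow", "left_hand"), ("right_elbow", "right_hand"),
    ("chest", "basin"),
    ("basin", "left_knee"), ("basin", "right_knee"),
    ("left_knee", "left_foot"), ("right_knee", "right_foot"),
]

def make_skeleton(coordinates):
    """Build a skeleton from a dict mapping body parts to (x, y) image
    coordinates. Missing parts (e.g. occluded by other objects) are
    allowed: only edges with both endpoints present are kept."""
    vertices = {p: coordinates[p] for p in BODY_PARTS if p in coordinates}
    edges = [(a, b) for a, b in EDGES if a in vertices and b in vertices]
    return vertices, edges
```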


Step 2: The main algorithm: comparing skeletons

In this section we present the algorithm our similarity method is based on. The main idea is to compute the Euclidean distance between corresponding vertices of different skeletons and, by comparing it to a previously set threshold, decide whether they are similar or not.

The program will be built on the following structure. Given two skeletons, they will be defined as S1 = (V1, E1) and S2 = (V2, E2), where V1 and V2 represent two sets of body parts while E1 and E2 are the corresponding sets of edges. In order to achieve an effective comparison, the two skeletons must have the same number of vertices, not necessarily related to the same body parts; this allows the program to detect the potential presence of symmetric structures. To remove zooming problems, an edge-length normalization step is needed. Afterwards, the two obtained structures are overlapped on a well-defined common vertex and a relative rotation, centered in that vertex, is performed.
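The normalization and overlap steps can be sketched as follows. We assume here that scaling by the mean edge length removes zoom differences and that the chest serves as the common anchor vertex; the proposal does not fix either choice, so both are illustrative assumptions:

```python
import math

def normalize_and_anchor(vertices, edges, anchor="chest"):
    """Remove zoom differences by scaling so the mean edge length is 1,
    then translate so the anchor vertex sits at the origin.

    Hypothetical sketch: the mean-edge-length scale and the "chest"
    anchor are our assumptions, not choices fixed by the project.
    """
    lengths = [math.dist(vertices[a], vertices[b]) for a, b in edges]
    scale = sum(lengths) / len(lengths)
    ax, ay = vertices[anchor]
    return {
        part: ((x - ax) / scale, (y - ay) / scale)
        for part, (x, y) in vertices.items()
    }
```

Once both skeletons are normalized and anchored at the same vertex, the relative rotation described above can be applied around the origin.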

The effectiveness of this method depends on how accurate the performed rotation is. To achieve this level of precision we set a small step angle and, using a loop, find the “maximum-similarity” configuration. The loop stops when a full 360° rotation has been performed and all the distances have been calculated. The sought value corresponds to the global minimum of the controlling function. This function is based on the standard Euclidean distance between corresponding vertices; by summing up all the contributions it represents an overall-inequality parameter. Following this procedure, the analyzed skeletons are identified as similar only if the minimum computed distance is lower than a previously estimated threshold. This value is determined using the training dataset, to which the algorithm will be strictly related.
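The rotation loop above can be sketched directly. This assumes skeletons already normalized and anchored at the origin, and a 1° step angle; the function names and the step size are our illustrative choices:

```python
import math

def total_distance(v1, v2):
    """Overall-inequality parameter: sum of Euclidean distances between
    corresponding vertices (only parts present in both skeletons)."""
    common = set(v1) & set(v2)
    return sum(math.dist(v1[p], v2[p]) for p in common)

def rotate(vertices, angle):
    """Rotate all vertices by `angle` (radians) around the origin,
    i.e. around the common anchor vertex."""
    c, s = math.cos(angle), math.sin(angle)
    return {p: (c * x - s * y, s * x + c * y) for p, (x, y) in vertices.items()}

def min_distance(v1, v2, step_deg=1.0):
    """Scan a full 360° rotation in small steps and return the global
    minimum of the controlling function."""
    best = float("inf")
    angle = 0.0
    while angle < 360.0:
        best = min(best, total_distance(v1, rotate(v2, math.radians(angle))))
        angle += step_deg
    return best

def are_similar(v1, v2, threshold, step_deg=1.0):
    """Skeletons are similar if the minimum computed distance is lower
    than the threshold estimated on the training dataset."""
    return min_distance(v1, v2, step_deg) < threshold
```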

Figure [3]: skeletons
Step 3: Global Similarity

This entire procedure is fundamental in order to identify similarities between skeletons extracted from single elements, and it is therefore not sufficient to achieve the final goal: verifying the global similarity of given paintings. To accomplish this purpose, it is possible to repeatedly apply the described procedure to all the skeletons in the pictures and estimate how many pairs satisfy the similarity requirement. Since this approach is a local analysis confined to single elements, a whole geometrical pattern should also be tested in order to obtain information about the general distribution of the studied subjects. These global patterns are easily defined by simply using the coordinates of the head vertices. In this way a more accurate analysis can be achieved by performing both a local and a global comparison.
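The pairwise counting step can be sketched as follows. The scoring as a fraction of similar pairs, and the function names, are our assumptions; the proposal only asks to estimate how many pairs satisfy the requirement:

```python
def global_similarity(skeletons_a, skeletons_b, pair_is_similar):
    """Fraction of skeleton pairs (one from each painting) that satisfy
    the local similarity requirement. `pair_is_similar` is the local
    comparison described in Step 2, passed in as a callable."""
    pairs = [(s1, s2) for s1 in skeletons_a for s2 in skeletons_b]
    hits = sum(1 for s1, s2 in pairs if pair_is_similar(s1, s2))
    return hits / len(pairs) if pairs else 0.0

def head_pattern(skeletons):
    """Global geometric pattern: the coordinates of the head vertices,
    describing the general distribution of subjects in the painting."""
    return [sk["head"] for sk in skeletons if "head" in sk]
```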



  • Week 1: literature review on skeleton creation (Kinect camera) and machine learning techniques.
  • Week 2-4: working on dataset and implementation of skeleton identification code.
  • Week 5: creating the skeleton database.
  • Week 6-10: implementation of the main similarity algorithm (local and global analysis).
  • Week 11: configuration of the program using the training dataset.
  • Week 12: validation of the program using the testing dataset.
  • Week 13-14: checking the results and final presentation.





Group members: Alessio Santecchia, Daniyar Chumbalov and Mattia Bergaglio