According to the schedule posted on our website, the first stage of this project focuses on information gathering. The target pictures are of several famous tourist sites in Venice. These pictures will then pass through a classification process that filters out improper pictures before the subsequent 3D-reconstruction step.
To achieve this goal, a Python script was first developed. The script downloads pictures of Venice automatically from the internet and thus builds up the basic database for the project. The database is not yet large enough, and the gathering process will continue in the coming weeks until the classification code is completed. In parallel, a semantic query was also carried out. According to the original plan, semantic queries would be used to construct a parallel database for the 3D-reconstruction, which would require several query keywords corresponding to the various tourist sites in Venice. However, we then decided to run a case study instead: applying the semantic query to one specific site (we chose Basilica di San Marco as the target), selecting proper pictures both manually and through histogram analysis in MATLAB, and finally feeding the selection into the 3D-reconstruction software.
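The downloading script itself is not reproduced in this report; a minimal sketch of the idea, assuming a plain list of image URLs as input (the URL-to-filename helper and the output directory name are illustrative, not part of the actual script), could look like this:

```python
import os
import urllib.request
from urllib.parse import urlparse

def filename_for(url, index):
    """Derive a local file name from the image URL, with a numbered fallback."""
    name = os.path.basename(urlparse(url).path)
    return name if name else f"image_{index:04d}.jpg"

def download_images(urls, out_dir="venice_db"):
    """Download each image URL into out_dir, skipping unreachable ones."""
    os.makedirs(out_dir, exist_ok=True)
    saved = []
    for i, url in enumerate(urls):
        path = os.path.join(out_dir, filename_for(url, i))
        try:
            urllib.request.urlretrieve(url, path)
            saved.append(path)
        except OSError:
            continue  # skip images that fail to download
    return saved
```

In practice the real script also has to deal with rate limits and duplicate files, which this sketch omits.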
The objective of this case study is to test and verify the robustness of the software, which may affect the threshold settings in the selection stage of the clustering algorithm. For instance, if we input 30 pictures of which 5 are irrelevant (differing in parameters such as histogram distribution, brightness, contrast ratio, and illumination, discussed in a later paragraph) and the software still outputs an acceptable result, this indicates the error tolerance of the clustering algorithm. Another concern about robustness is whether the software can handle subordinate parts of a picture well. For instance, if we work on pictures of Basilica di San Marco, most of them will include various kinds of sky (blue, cloudy, grey, and so on). If such secondary elements do not strongly affect the 3D-reconstruction of the main building, we can allow a more flexible setup of the clustering algorithm.
Meanwhile, histogram analysis may also be executed as a preprocessing step in the clustering algorithm. In addition, several classification methods (such as KNN and k-means) will be tested, and the optimal one will be chosen. This testing and selection process will be a main topic of the next two periodic reports.
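To make the clustering idea concrete, the following is a minimal pure-Python sketch of k-means over histogram feature vectors. This is not the algorithm as it will finally be implemented (a library implementation will likely be used); the function name, iteration count, and random initialization are illustrative assumptions.

```python
import random

def kmeans(vectors, k, iters=20, seed=0):
    """Minimal k-means on histogram feature vectors (illustrative sketch)."""
    rng = random.Random(seed)
    centers = [list(v) for v in rng.sample(vectors, k)]
    for _ in range(iters):
        # assign each vector to its nearest center (squared Euclidean distance)
        clusters = [[] for _ in range(k)]
        for v in vectors:
            d = [sum((a - b) ** 2 for a, b in zip(v, c)) for c in centers]
            clusters[d.index(min(d))].append(v)
        # recompute each center as the mean of its cluster
        for j, cl in enumerate(clusters):
            if cl:
                centers[j] = [sum(col) / len(cl) for col in zip(*cl)]
    # final assignment of labels
    labels = []
    for v in vectors:
        d = [sum((a - b) ** 2 for a, b in zip(v, c)) for c in centers]
        labels.append(d.index(min(d)))
    return labels, centers
```

Pictures whose vectors land in a small or distant cluster would then be candidates for removal as improper pictures.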
Another important focus during the case study is the definition of improper pictures. At this stage, since we have access to pictures of a single site, we can focus on a few parameters:
i) Histogram Distribution
From the histogram distribution, we can easily identify and discard pictures that are underexposed or overexposed.
ii) Brightness
Brightness helps to separate pictures taken in the daytime from those taken at night, and can also serve as an additional exposure check.
Ideally, a numerical standard for evaluating improper pictures will be derived later, when setting the thresholds for the clustering algorithm.
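As an example of how such a numerical standard might look, the following sketch flags under- and overexposure from a 256-bin grayscale histogram and reports mean brightness for the day/night split. The tail size and fraction thresholds are illustrative assumptions, not values the project has fixed yet.

```python
def exposure_flags(hist, dark_frac=0.5, bright_frac=0.5, tail=16):
    """Flag exposure problems from a 256-bin grayscale histogram.

    hist[i] is the pixel count at gray level i. The tail width and the
    fraction thresholds are placeholder values for illustration only.
    """
    total = sum(hist)
    dark = sum(hist[:tail]) / total      # share of near-black pixels
    bright = sum(hist[-tail:]) / total   # share of near-white pixels
    mean = sum(i * h for i, h in enumerate(hist)) / total  # mean gray level
    return {
        "underexposed": dark > dark_frac,
        "overexposed": bright > bright_frac,
        "mean_brightness": mean,
    }
```

A low mean brightness would mark a night shot; a large dark or bright tail would mark a badly exposed picture to be filtered out.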
As a further point, when we talk about 3D-reconstruction we may mean not only the reconstruction of a building, but also of the surroundings of a particular location, such as the area around Ponte di Rialto. This is mainly because tourists may stand on that bridge and photograph its surroundings. Though reasonable, realizing this concept will be harder than processing buildings, since we cannot extract enough consistent features from the surroundings. One feasible idea is to feed panoramic pictures into the software, but this barely makes sense in practice. Thus, a more complicated algorithm will be needed, and it will be discussed in later research.