Austrian Institute of Technology
Speaker: Oliver Zendel, researcher at Austrian Institute of Technology, Vienna, Austria
The rising dominance of deep learning has shifted the focus to data engineering when solving computer vision (CV) tasks. The quality and thoroughness of training datasets are crucial for creating robust CV solutions, and both deep learning and classical algorithms need comparable scrutiny when creating test datasets. Algorithm robustness can only be evaluated successfully using challenging test data. Both training and test data thus require quality control, yet methods to evaluate the quality of datasets (e.g., their completeness) are sparse.
In this talk I will present our work on determining the quality of existing test datasets: CV-HAZOP. The talk gives a short introduction to the topic and offers insights into how to apply these methods to create datasets for an actual CV algorithm. In addition, the presentation gives an overview of the upcoming CVPR 2018 Robust Vision Challenge, where we are trying to reduce the dataset bias introduced by the current state of the art in benchmarking. Finally, WildDash, the new AIT dataset and benchmark for semantic and instance segmentation, is presented. It was created specifically with CV-HAZOP in mind to yield a diverse and challenging dataset. It makes it possible to measure the robustness of algorithms with regard to individual hazards (such as blur, overexposure, and rain) by calculating the impact of each hazard on an algorithm's performance.
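The per-hazard impact described above can be sketched as a simple relative performance drop. This is only an illustrative computation, not the benchmark's actual metric: the scores, hazard names, and the choice of mean IoU as the performance measure are all assumptions for the example.

```python
# Hypothetical sketch of per-hazard impact: compare an algorithm's score
# (here assumed to be mean IoU) on hazard-free frames against its score
# on frames annotated with each hazard. All numbers are made up.
baseline_score = 0.72  # assumed mean IoU on hazard-free frames

hazard_scores = {      # assumed mean IoU on frames showing each hazard
    "blur": 0.61,
    "overexposure": 0.55,
    "rain": 0.48,
}

def hazard_impact(baseline, score):
    """Relative performance drop caused by a hazard (0.0 = no impact)."""
    return (baseline - score) / baseline

impacts = {name: hazard_impact(baseline_score, s)
           for name, s in hazard_scores.items()}

# Report hazards from most to least harmful for this algorithm.
for hazard, impact in sorted(impacts.items(), key=lambda kv: -kv[1]):
    print(f"{hazard}: {impact:.1%} performance drop")
```

Ranking hazards this way shows which conditions degrade a given algorithm the most, which is the kind of robustness insight the benchmark is designed to provide.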