Visual Information Processing Group

Scalable and Efficient Learning from Crowds with Gaussian Processes


Pablo Morales-Álvarez, Pablo Ruiz, Raúl Santos-Rodríguez, Rafael Molina, and Aggelos K. Katsaggelos, “Scalable and Efficient Learning from Crowds with Gaussian Processes”, Information Fusion, 2019, in press. DOI:10.1016/j.inffus.2018.12.008


Over the last few years, multiply-annotated data has become a very popular source of information. Online platforms such as Amazon Mechanical Turk have revolutionized the labelling process needed for any classification task, sharing the effort among a number of annotators (instead of the classical single expert). This crowdsourcing approach has introduced new challenging problems, such as handling disagreements on the annotated samples or combining the unknown expertise of the annotators. Probabilistic methods, such as Gaussian Processes (GP), have proven successful in modeling this new crowdsourcing scenario. However, GPs do not scale well with the training set size, which makes them prohibitive for medium-to-large datasets (beyond 10K training instances). This constitutes a serious limitation for current real-world applications. In this work, we introduce two scalable and efficient GP-based crowdsourcing methods that allow for processing previously prohibitive datasets. The first one is an efficient and fast approximation to a GP with squared exponential (SE) kernel. The second allows for learning a more flexible kernel at the expense of heavier training (but is still scalable to large datasets). Since the latter is not a GP-SE approximation, it can also be considered a whole new scalable and efficient crowdsourcing method, useful for any dataset size. Both methods use Fourier features and variational inference, can predict the class of new samples, and estimate the expertise of the involved annotators. A comprehensive set of experiments compares them with state-of-the-art probabilistic approaches on synthetic and real crowdsourcing datasets of different sizes. They stand out as the best performing approach for large-scale problems. Moreover, the second method is competitive with the current state of the art for small datasets.
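The key scalability ingredient mentioned above, Fourier features, can be illustrated with a short sketch. The following is a minimal Python illustration of random Fourier features (Rahimi and Recht style) approximating the SE kernel, not the paper's RFFGPCR/VFFGPCR implementations; the function name and parameters are illustrative only. Replacing the exact kernel matrix with an inner product of D-dimensional feature maps is what lets the GP training cost drop from cubic to linear in the number of training points.

```python
import numpy as np

def rff_features(X, D=2000, lengthscale=1.0, rng=None):
    """Map inputs X (n x d) to D random Fourier features whose inner
    products approximate the SE kernel k(x, y) = exp(-||x-y||^2 / (2 l^2))."""
    rng = np.random.default_rng(0) if rng is None else rng
    n, d = X.shape
    # Spectral frequencies drawn from the kernel's spectral density (Gaussian)
    W = rng.normal(0.0, 1.0 / lengthscale, size=(d, D))
    # Random phases, uniform on [0, 2*pi)
    b = rng.uniform(0.0, 2 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
Z = rff_features(X, D=2000, rng=rng)

# Exact SE kernel matrix vs. its low-rank Fourier-feature approximation:
K_exact = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1) / 2.0)
K_approx = Z @ Z.T  # n x n, but computable through n x D factors
```

With the factorized form, all GP computations can be carried out through the n x D matrix Z instead of the n x n kernel matrix, which is what makes the training cost scale linearly with n, as stated in the highlights below.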


  • Two novel scalable and efficient methods to learn from crowds are proposed.
  • Their computational training cost scales up linearly with the training set size.
  • Their computational test cost is independent of the training set size.
  • They are applied to previously prohibitive datasets and exhibit great performance.
  • Both approaches accurately estimate and fuse the expertise of all the annotators.
    Datasets

    The proposed method is evaluated on four datasets. The "sphere" and "cubes" datasets can be downloaded here and here, respectively. The "music genre" and "sentence polarity" datasets can be downloaded from the author's website.

    MATLAB code

    The proposed methods can be downloaded from the following links: RFFGPCR and VFFGPCR. Among the methods they are compared against in the paper, we provide our own implementations of Yan and Raykar. For GP-MV, we provide our own implementation of GP for classification here. Finally, an implementation of Rodrigues can be found on the author's website.

    The software is also available on GitHub:


    The programs are granted free of charge for research and education purposes only. Scientific results produced using the software provided shall acknowledge the use of the implementation provided by us. If you plan to use it for non-scientific purposes, don't hesitate to contact us.

    Because the programs are licensed free of charge, there is no warranty for the program, to the extent permitted by applicable law. Except when otherwise stated in writing, the copyright holders and/or other parties provide the program "as is" without warranty of any kind, either expressed or implied, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. The entire risk as to the quality and performance of the program is with you. Should the program prove defective, you assume the cost of all necessary servicing, repair or correction.

    In no event unless required by applicable law or agreed to in writing will any copyright holder, or any other party who may modify and/or redistribute the program, be liable to you for damages, including any general, special, incidental or consequential damages arising out of the use or inability to use the program (including but not limited to loss of data or data being rendered inaccurate or losses sustained by you or third parties or a failure of the program to operate with any other programs), even if such holder or other party has been advised of the possibility of such damages.

    Visual Information Processing Group
    University of Granada