. . . . "The generated microtasks are then submitted to the crowdsourcing platform. When a worker accepts a microtask or HIT, she is presented with a table that con- tains triples associated to an RDF resource, as shown in Figure 1. For each triple, the worker determines whether the triple is ’incorrect’ with respect to a fixed set of quality issues Q (cf. Section 2): object incorrectly/incompletely extracted, datatype incorrectly extracted or incorrect link, abbreviated as ‘Value’, ‘datatype’, and ‘Link’, respectively. Once the worker has assessed all the triples within a microtask, she proceeds to submit the HIT. Consistently with the Find stage implemented with a contest, the outcome of the microtasks corresponds to a set of triples T judged as ‘incorrect’ by workers and classified according to the detected quality issues in Q." . . . . "2019-11-08T18:05:11+01:00"^^ . .