@prefix this: .
@prefix sub: .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix dc: <http://purl.org/dc/terms/> .
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix pav: <http://purl.org/pav/> .
@prefix np: <http://www.nanopub.org/nschema#> .
@prefix doco: <http://purl.org/spar/doco/> .
@prefix c4o: <http://purl.org/spar/c4o/> .

sub:Head {
  this: np:hasAssertion sub:assertion ;
    np:hasProvenance sub:provenance ;
    np:hasPublicationInfo sub:pubinfo ;
    a np:Nanopublication .
}

sub:assertion {
  sub:paragraph c4o:hasContent "In this work, we crowdsource three specific LD quality issues. We did so building on previous work of ours [43] which analyzed common quality problems encountered in Linked Data sources and classified them according to the extent to which they could be amenable to crowdsourcing. The first research question explored is hence: RQ1: Is it feasible to detect quality issues in LD sets via crowdsourcing mechanisms? This question aims at establishing a general understanding if crowdsourcing approaches can be used to find issues in LD sets and if so, to what degree they are an efficient and effective solution. Secondly, given the option of different crowds, we formulate RQ2: In a crowdsourcing approach, can we employ unskilled lay users to identify quality issues in RDF triple data or to what extent is expert validation needed and desirable? As a subquestion to RQ2, we also examined which type of crowd is most suitable to detect which type of quality issue (and, conversely, which errors they are prone to make). With these questions, we are interested (i) in learning to what extent we can exploit the cost-efficiency of lay users, or if the quality of error detection is prohibitively low. We (ii) investigate how well experts generally perform in a crowdsourcing setting and if and how they outperform lay users. And lastly, (iii) it is of interest if one of the two distinct approaches performs well in areas that might not be a strength of the other method and crowd." ;
    a doco:Paragraph .
}

sub:provenance {
  sub:assertion prov:hadPrimarySource ;
    prov:wasAttributedTo .
}

sub:pubinfo {
  this: dc:created "2019-11-10T12:34:11+01:00"^^xsd:dateTime ;
    pav:createdBy .
}