http://purl.org/np/RA6kW-GshS-gaAhtf42o5v0xuQ6I4oDmcIVSTBom_YKJc#Head
http://purl.org/np/RA6kW-GshS-gaAhtf42o5v0xuQ6I4oDmcIVSTBom_YKJc
http://www.nanopub.org/nschema#hasAssertion
http://purl.org/np/RA6kW-GshS-gaAhtf42o5v0xuQ6I4oDmcIVSTBom_YKJc#assertion
http://purl.org/np/RA6kW-GshS-gaAhtf42o5v0xuQ6I4oDmcIVSTBom_YKJc
http://www.nanopub.org/nschema#hasProvenance
http://purl.org/np/RA6kW-GshS-gaAhtf42o5v0xuQ6I4oDmcIVSTBom_YKJc#provenance
http://purl.org/np/RA6kW-GshS-gaAhtf42o5v0xuQ6I4oDmcIVSTBom_YKJc
http://www.nanopub.org/nschema#hasPublicationInfo
http://purl.org/np/RA6kW-GshS-gaAhtf42o5v0xuQ6I4oDmcIVSTBom_YKJc#pubinfo
http://purl.org/np/RA6kW-GshS-gaAhtf42o5v0xuQ6I4oDmcIVSTBom_YKJc
http://www.w3.org/1999/02/22-rdf-syntax-ns#type
http://www.nanopub.org/nschema#Nanopublication
http://purl.org/np/RA6kW-GshS-gaAhtf42o5v0xuQ6I4oDmcIVSTBom_YKJc#assertion
http://purl.org/np/RA6kW-GshS-gaAhtf42o5v0xuQ6I4oDmcIVSTBom_YKJc#paragraph
http://purl.org/spar/c4o/hasContent
In this work, we crowdsource three specific LD quality issues. We did so building on previous work of ours [43], which analyzed common quality problems encountered in Linked Data sources and classified them according to the extent to which they are amenable to crowdsourcing. The first research question explored is hence: RQ1: Is it feasible to detect quality issues in LD sets via crowdsourcing mechanisms? This question aims at establishing a general understanding of whether crowdsourcing approaches can be used to find issues in LD sets and, if so, to what degree they are an efficient and effective solution. Secondly, given the option of different crowds, we formulate RQ2: In a crowdsourcing approach, can we employ unskilled lay users to identify quality issues in RDF triple data, or to what extent is expert validation needed and desirable? As a subquestion to RQ2, we also examined which type of crowd is most suitable to detect which type of quality issue (and, conversely, which errors each is prone to make). With these questions, we are interested in (i) learning to what extent we can exploit the cost-efficiency of lay users, or whether the quality of error detection is prohibitively low. We (ii) investigate how well experts generally perform in a crowdsourcing setting and whether and how they outperform lay users. And lastly, (iii) it is of interest whether one of the two distinct approaches performs well in areas that might not be a strength of the other method and crowd.
http://purl.org/np/RA6kW-GshS-gaAhtf42o5v0xuQ6I4oDmcIVSTBom_YKJc#paragraph
http://www.w3.org/1999/02/22-rdf-syntax-ns#type
http://purl.org/spar/doco/Paragraph
http://purl.org/np/RA6kW-GshS-gaAhtf42o5v0xuQ6I4oDmcIVSTBom_YKJc#provenance
http://purl.org/np/RA6kW-GshS-gaAhtf42o5v0xuQ6I4oDmcIVSTBom_YKJc#assertion
http://www.w3.org/ns/prov#hadPrimarySource
http://dx.doi.org/10.3233/SW-160239
http://purl.org/np/RA6kW-GshS-gaAhtf42o5v0xuQ6I4oDmcIVSTBom_YKJc#assertion
http://www.w3.org/ns/prov#wasAttributedTo
https://orcid.org/0000-0003-0530-4305
http://purl.org/np/RA6kW-GshS-gaAhtf42o5v0xuQ6I4oDmcIVSTBom_YKJc#pubinfo
http://purl.org/np/RA6kW-GshS-gaAhtf42o5v0xuQ6I4oDmcIVSTBom_YKJc
http://purl.org/dc/terms/created
2019-11-10T12:34:11+01:00
http://purl.org/np/RA6kW-GshS-gaAhtf42o5v0xuQ6I4oDmcIVSTBom_YKJc
http://purl.org/pav/createdBy
https://orcid.org/0000-0002-7114-6459