@prefix this: .
@prefix sub: .
@prefix xsd: .
@prefix dc: .
@prefix prov: .
@prefix pav: .
@prefix np: .
@prefix doco: .
@prefix c4o: .

sub:Head {
  this: np:hasAssertion sub:assertion;
    np:hasProvenance sub:provenance;
    np:hasPublicationInfo sub:pubinfo;
    a np:Nanopublication .
}

sub:assertion {
  sub:paragraph c4o:hasContent "Web data quality assessment. Existing frameworks for the quality assessment of the Web of Data can be broadly classified as automated (e.g. [15]), semi-automated (e.g. [11]) and manual (e.g. [4,29]). In particular, for the quality issues used in our experiments, [15] performs quality assessment on links, but it is fully automated and thus limited, as it does not allow the user to choose the input dataset. Moreover, the incorrect interlinks it detects require human verification, since the approach does not take semantics into account. On the other hand, for the detection of incorrect object values, datatypes and literals, the SWIQA framework [14] can be used, employing different outlier and clustering techniques. However, it lacks specific syntactic rules to detect all of the errors and requires the user to know the underlying schema in order to specify these rules. Other researchers analyzed the quality of Web [5] and RDF [17] data. The latter study focuses on errors occurring during the publication of Linked Data sets. Recently, a study [18] examined four million RDF/XML documents to analyze Linked Data conformance. These studies performed large-scale quality assessment on Linked Data but are often limited in their ability to produce interpretable results, demand user expertise or are bound to a given dataset.";
    a doco:Paragraph .
}

sub:provenance {
  sub:assertion prov:hadPrimarySource ;
    prov:wasAttributedTo .
}

sub:pubinfo {
  this: dc:created "2019-09-20T18:05:11+01:00"^^xsd:dateTime;
    pav:createdBy .
}