@prefix dc: <http://purl.org/dc/terms/> .
@prefix this: .
@prefix sub: .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix pav: <http://purl.org/pav/> .
@prefix np: <http://www.nanopub.org/nschema#> .
@prefix doco: <http://purl.org/spar/doco/> .
@prefix c4o: <http://purl.org/spar/c4o/> .

sub:Head {
  this: np:hasAssertion sub:assertion;
    np:hasProvenance sub:provenance;
    np:hasPublicationInfo sub:pubinfo;
    a np:Nanopublication .
}

sub:assertion {
  sub:paragraph c4o:hasContent "OIE output can indeed be considered structured data compared to free text, but it still lacks a disambiguation facility: extracted facts generally do not employ unique identifiers (i.e., URIs), thus suffering from the intrinsic polysemy of natural language (e.g., Jaguar may correspond to the animal or to a well-known car brand). To tackle the issue, [12] propose a framework that clusters OIE facts and maps them to elements of a target KB. Like us, they leverage EL techniques for disambiguation and choose DBpedia as the target KB. Nevertheless, the authors focus on A-Box population, while we also cater for the T-Box part. Moreover, OIE systems are used as black boxes, in contrast to our full implementation of the extraction pipeline. Finally, the relations are still binary, as opposed to our n-ary ones. Taking Wikipedia articles as input, Legalo [28] exploits page links manually inserted by editors and attempts to induce the relations between them via NLP. Again, the extracted relations are binary and are not mapped to a target KB for enrichment purposes.";
    a doco:Paragraph .
}

sub:provenance {
  sub:assertion prov:hadPrimarySource ;
    prov:wasAttributedTo .
}

sub:pubinfo {
  this: dc:created "2019-11-10T18:05:11+01:00"^^xsd:dateTime;
    pav:createdBy .
}
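For readers who want to consume this nanopublication programmatically, the following is a minimal Python sketch using rdflib. It assumes the elided IRIs above have been restored and that the graph is stored locally under the hypothetical filename paragraph.trig; it follows np:hasAssertion from the head graph to the assertion graph and prints the paragraph attached via c4o:hasContent.

from rdflib import ConjunctiveGraph, Namespace

NP = Namespace("http://www.nanopub.org/nschema#")
C4O = Namespace("http://purl.org/spar/c4o/")

# Load all four named graphs (Head, assertion, provenance, pubinfo) as quads.
g = ConjunctiveGraph()
g.parse("paragraph.trig", format="trig")  # hypothetical local copy of this nanopublication

# The head graph links the nanopublication to its assertion graph; the
# assertion graph carries the quoted paragraph via c4o:hasContent.
for nanopub, assertion_id in g.subject_objects(NP.hasAssertion):
    assertion = g.get_context(assertion_id)
    for paragraph, text in assertion.subject_objects(C4O.hasContent):
        print(f"{nanopub} -> {paragraph}: {text[:60]}...")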