@prefix dc: <http://purl.org/dc/terms/> .
@prefix this: .
@prefix sub: .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix pav: <http://purl.org/pav/> .
@prefix np: <http://www.nanopub.org/nschema#> .
@prefix doco: <http://purl.org/spar/doco/> .
@prefix c4o: <http://purl.org/spar/c4o/> .

sub:Head {
  this: np:hasAssertion sub:assertion ;
    np:hasProvenance sub:provenance ;
    np:hasPublicationInfo sub:pubinfo ;
    a np:Nanopublication .
}

sub:assertion {
  sub:paragraph c4o:hasContent "In this scenario, the encyclopedia Wikipedia contains a huge amount of data, which may represent the best digital approximation of human knowledge. Recent efforts, most notably DBpedia [23], Freebase [8], YAGO [21], and Wikidata [31], attempt to extract semi-structured data from Wikipedia in order to build KBs that have proven useful for a variety of applications, such as question answering, entity summarization, and entity linking (EL), just to name a few. The idea has not only attracted a continuously rising commitment from research communities, but has also become a substantial focus of the largest Web companies. As an anecdotal yet remarkable proof, Google acquired Freebase in 2010, embedded it in its Knowledge Graph, and has lately opted to shut it down to the public. Currently, it is foreseen that Freebase data will eventually migrate to Wikidata via the primary sources tool, which aims at standardizing the flow for data donations." ;
    a doco:Paragraph .
}

sub:provenance {
  sub:assertion prov:hadPrimarySource ;
    prov:wasAttributedTo .
}

sub:pubinfo {
  this: dc:created "2019-11-10T12:34:11+01:00"^^xsd:dateTime ;
    pav:createdBy .
}
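The serialization above is incomplete as it stands: the this: and sub: prefix IRIs, as well as the object IRIs in the provenance and pubinfo graphs, are missing from the source, so the TriG cannot be parsed as-is. The following minimal Python sketch, using rdflib, shows how the four named graphs (head, assertion, provenance, publication info) hang together and how the head graph can be sanity-checked. The http://example.org/ IRIs and the elided paragraph text are hypothetical stand-ins for the missing values, not part of the original nanopublication.

# A minimal sketch of loading and sanity-checking this nanopublication with
# rdflib. The this:/sub: prefixes and the provenance/pubinfo object IRIs are
# missing from the source above, so hypothetical http://example.org/ IRIs
# stand in for them here.
from rdflib import ConjunctiveGraph, Namespace, URIRef

NP = Namespace("http://www.nanopub.org/nschema#")

TRIG = """
@prefix dc:   <http://purl.org/dc/terms/> .
@prefix this: <http://example.org/np1> .             # hypothetical
@prefix sub:  <http://example.org/np1#> .            # hypothetical
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix pav:  <http://purl.org/pav/> .
@prefix np:   <http://www.nanopub.org/nschema#> .
@prefix doco: <http://purl.org/spar/doco/> .
@prefix c4o:  <http://purl.org/spar/c4o/> .

sub:Head {
  this: np:hasAssertion sub:assertion ;
    np:hasProvenance sub:provenance ;
    np:hasPublicationInfo sub:pubinfo ;
    a np:Nanopublication .
}
sub:assertion {
  sub:paragraph c4o:hasContent "..." ;    # paragraph text elided here
    a doco:Paragraph .
}
sub:provenance {
  sub:assertion prov:hadPrimarySource <http://example.org/source> ;   # hypothetical
    prov:wasAttributedTo <http://example.org/author> .                # hypothetical
}
sub:pubinfo {
  this: dc:created "2019-11-10T12:34:11+01:00"^^xsd:dateTime ;
    pav:createdBy <http://example.org/creator> .                      # hypothetical
}
"""

# Parse all four named graphs into one dataset.
g = ConjunctiveGraph()
g.parse(data=TRIG, format="trig")

nanopub = URIRef("http://example.org/np1")

# The head graph must link the nanopublication to its assertion,
# provenance, and publication-info graphs; print each link.
for pred in (NP.hasAssertion, NP.hasProvenance, NP.hasPublicationInfo):
    for obj in g.objects(nanopub, pred):
        print(f"{pred.n3(g.namespace_manager)} -> {obj}")

In a published nanopublication, the this: IRI would be the publication's own identifier (typically a trusty URI) rather than an example.org stand-in, and the head-graph check above roughly mirrors the basic structural constraint that nanopub tooling enforces.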