The KnowledgeStore: Despite the widespread diffusion of structured data sources and the public acclaim of the Linked Open Data initiative, a large amount of information is nowadays still available only in unstructured form, both on the Web and within organizations. While different in form, structured and unstructured content speaks about the very same entities of the world, their properties, and their relations; still, frameworks for their seamless integration are lacking. The KnowledgeStore is a scalable, fault-tolerant storage system, grounded in Semantic Web technologies, that jointly stores, manages, retrieves, and semantically queries both structured and unstructured data. The KnowledgeStore plays a central role in the NewsReader EU project: it stores all the content that has to be processed and produced in order to extract knowledge from news, and it provides a shared data space through which NewsReader components cooperate.
The contextualized knowledge repository: The contextualized knowledge repository (CKR) is a knowledge representation and reasoning platform, built on Semantic Web technologies, that enables the storage of contextualized knowledge, i.e., knowledge that holds only in specific circumstances. The CKR supports query answering and reasoning over context-sensitive knowledge. It addresses an emerging need in the Semantic Web: as large amounts of Linked Data are published on the Web, it is becoming apparent that the validity of published knowledge is not absolute, but often depends on time, location, topic, and other contextual attributes. Applications of the CKR platform include representing and reasoning on metadata such as provenance, access control, and trust within the PlanetData EU project.
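As a rough illustration of the contextualized-knowledge idea, the sketch below stores statements together with context attributes (time, topic) and answers queries only with the statements that hold in the requested context. The data model, attribute names, and coverage check are invented for illustration and are not the CKR API.

```python
# Minimal sketch of context-dependent statements: a triple holds only
# within a context described by attributes such as time and topic.
# All class and attribute names are illustrative, not taken from the CKR.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Context:
    time: str = "any"
    topic: str = "any"

@dataclass
class ContextualStore:
    statements: list = field(default_factory=list)  # (s, p, o, context)

    def add(self, s, p, o, ctx):
        self.statements.append((s, p, o, ctx))

    def query(self, s=None, p=None, o=None, ctx=None):
        """Return triples matching the pattern that hold in the given context.

        A statement holds in ctx if each of its context attributes is
        either 'any' or equal to the corresponding attribute of ctx
        (a crude stand-in for a context coverage relation)."""
        def holds(stmt_ctx):
            if ctx is None:
                return True
            return all(
                getattr(stmt_ctx, a) in ("any", getattr(ctx, a))
                for a in ("time", "topic")
            )
        return [
            (ts, tp, to) for ts, tp, to, tc in self.statements
            if (s is None or s == ts)
            and (p is None or p == tp)
            and (o is None or o == to)
            and holds(tc)
        ]

store = ContextualStore()
store.add("Obama", "holdsOffice", "US President",
          Context(time="2009-2017", topic="politics"))
store.add("Trump", "holdsOffice", "US President",
          Context(time="2017-2021", topic="politics"))

# Only the statement valid in the 2009-2017 context is returned.
in_2009 = store.query(p="holdsOffice",
                      ctx=Context(time="2009-2017", topic="politics"))
```

In a full CKR, contexts would be first-class resources with their own hierarchy and the coverage check would be performed by a reasoner; the dictionary-style filter above only conveys the intuition.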
In the past we have developed methods for the implementation of the following knowledge services:
- Semantic matching
- Semantic matching is the service for the automatic discovery of semantic relations, i.e. a mapping, between heterogeneous elements of different ontologies. We contributed to the construction of the CtxMatch system, the first system to encode matching as a logical problem.
- Similarity service
- The similarity service takes two elements belonging to the same ontology and computes the semantic distance between them. The distance is typically a numeric similarity measure ranging from 0 to 1. The majority of these measures are implemented in the SimPack tool.
- Normalization service
- Normalization is the process of adding an (appropriate set of) linguistic senses to the elements of a schema. In particular, the normalization service can be used to attach the linguistic senses and commonsense axioms present in linguistic repositories (e.g., WordNet) to the concept names of an ontology. Such an addition allows for the discovery of misused concept names, i.e., names that contradict the commonsense knowledge contained in WordNet.
- Semantic look up
- Semantic look-up is a service that searches for the most similar relations or concepts in a set of ontologies. The look-up service takes search criteria, such as a series of keywords or a topic, and locates the ontologies that satisfy them. The approach consists in enriching the ontologies in the pool by means of the normalization service and then indexing the enriched ontologies.
- Instance migration
- The instance migration service enables the migration of instances between heterogeneous, overlapping ontologies. Such a need arises when one wants to reclassify a set of instances of one ontology under a semantically related target ontology.
- Mapping debugging/repairing
- The debugging/repairing service addresses the problem of detecting errors in a mapping between given ontologies and automatically fixing them. Erroneous mappings cause inconsistencies in the ontologies and therefore need to be repaired.
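The CtxMatch insight that matching is a logical problem can be sketched as follows: each ontology node is translated into a propositional formula over word senses, and the relation between two nodes is decided by checking entailment between the formulas. The senses, formulas, and brute-force truth-table check below are illustrative only; CtxMatch itself works over WordNet senses with a dedicated reasoner.

```python
# Toy reduction of matching to propositional entailment: a concept is a
# formula over sense variables, and the semantic relation between two
# concepts follows from which entailments hold between their formulas.

from itertools import product

def entails(vars_, f, g):
    """True iff f -> g holds under every truth assignment to vars_."""
    for values in product([False, True], repeat=len(vars_)):
        env = dict(zip(vars_, values))
        if f(env) and not g(env):
            return False
    return True

def match(vars_, f, g):
    """Return the semantic relation between two concept formulas."""
    fg, gf = entails(vars_, f, g), entails(vars_, g, f)
    if fg and gf:
        return "equivalent"
    if fg:
        return "more specific"   # f is subsumed by g
    if gf:
        return "more general"    # f subsumes g
    return "unrelated"

# 'car' and 'automobile' share a sense; 'red car' is 'red' AND 'car'.
senses = ["vehicle#1", "red#1"]
car = lambda e: e["vehicle#1"]
automobile = lambda e: e["vehicle#1"]
red_car = lambda e: e["vehicle#1"] and e["red#1"]
```

For example, `match(senses, red_car, car)` reports that a red car is more specific than a car, because the entailment holds in one direction only.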
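One common family of similarity measures of the kind implemented in SimPack is path-based: similarity is derived from the positions of the two elements in the is-a hierarchy. The following sketch computes a Wu-Palmer-style score over a tiny hand-made taxonomy; the taxonomy and code are illustrative, not taken from SimPack.

```python
# Wu-Palmer-style similarity over a toy is-a taxonomy: the closer the
# least common subsumer of two concepts is to them, the more similar
# they are. The score always falls in (0, 1].

TAXONOMY = {            # child -> parent (None marks the root)
    "cat": "mammal",
    "dog": "mammal",
    "mammal": "animal",
    "fish": "animal",
    "animal": None,
}

def path_to_root(node):
    path = [node]
    while TAXONOMY[node] is not None:
        node = TAXONOMY[node]
        path.append(node)
    return path

def depth(node):
    return len(path_to_root(node))

def lcs(a, b):
    """Least common subsumer: the deepest shared ancestor."""
    ancestors_a = set(path_to_root(a))
    for n in path_to_root(b):
        if n in ancestors_a:
            return n
    return None

def wu_palmer(a, b):
    """Similarity in (0, 1]: 2*depth(lcs) / (depth(a) + depth(b))."""
    c = lcs(a, b)
    return 2 * depth(c) / (depth(a) + depth(b))
```

Here `wu_palmer("cat", "dog")` exceeds `wu_palmer("cat", "fish")`, since cat and dog share the deeper subsumer `mammal` while cat and fish meet only at `animal`.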
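The normalization service can be pictured with a tiny stand-in for WordNet: each concept name is annotated with a sense, and a name is flagged as misused when its linguistic hypernym disagrees with the concept's parent in the ontology. All senses and concepts below are invented for illustration.

```python
# Sketch of normalization: attach senses from a hand-made sense
# repository (a stand-in for WordNet) and flag concept names whose
# sense contradicts their position in the ontology.

SENSES = {  # word -> (sense id, hypernym sense)
    "apple": ("apple#1", "fruit#1"),
    "fruit": ("fruit#1", "food#1"),
    "car":   ("car#1", "vehicle#1"),
}

ONTOLOGY = {  # concept -> parent concept
    "apple": "fruit",
    "car": "fruit",     # misused: a car is not a kind of fruit
}

def normalize(concept):
    """Return the concept annotated with its linguistic sense, if any."""
    sense = SENSES.get(concept)
    return (concept, sense[0] if sense else None)

def misused_names():
    """Concepts whose linguistic hypernym disagrees with their parent."""
    bad = []
    for concept, parent in ONTOLOGY.items():
        if concept in SENSES and parent in SENSES:
            _, hypernym = SENSES[concept]
            parent_sense, _ = SENSES[parent]
            if hypernym != parent_sense:
                bad.append(concept)
    return bad
```

Running `misused_names()` on this data singles out `car`, whose WordNet-style hypernym (`vehicle#1`) contradicts its placement under `fruit`.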
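The look-up pipeline described above, enriching the ontologies in the pool and then indexing them, can be sketched with an inverted index. Enrichment is approximated here by expanding concept names with synonyms from a tiny invented dictionary; ontology names, synonyms, and scoring are all illustrative.

```python
# Sketch of semantic look-up: build an inverted index over enriched
# ontologies, then rank ontologies by how many query keywords they cover.

from collections import defaultdict

SYNONYMS = {"car": ["automobile"], "doctor": ["physician"]}

ONTOLOGIES = {
    "transport.owl": ["car", "bus", "road"],
    "medicine.owl": ["doctor", "hospital", "drug"],
}

def build_index(ontologies):
    """Inverted index: term -> set of ontologies mentioning it (enriched)."""
    index = defaultdict(set)
    for name, concepts in ontologies.items():
        for c in concepts:
            index[c].add(name)
            for syn in SYNONYMS.get(c, []):   # enrichment step
                index[syn].add(name)
    return index

def lookup(index, keywords):
    """Rank ontologies by the number of query keywords they cover."""
    scores = defaultdict(int)
    for kw in keywords:
        for onto in index.get(kw, ()):
            scores[onto] += 1
    return sorted(scores, key=scores.get, reverse=True)

index = build_index(ONTOLOGIES)
```

Thanks to the enrichment step, a query for `automobile` still finds `transport.owl` even though that ontology only mentions `car`.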
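Instance migration reduces, in the simplest case, to reclassifying each instance under the target concept that its source concept is mapped to. The sketch below uses an invented mapping and instance set; a real service would exploit subsumption in the target ontology to pick the most specific admissible concept.

```python
# Sketch of instance migration: move instances from a source ontology to
# a semantically related target ontology via a concept-level mapping,
# reporting the instances that cannot be migrated.

MAPPING = {          # source concept -> target concept
    "Auto": "Car",
    "Lastwagen": "Truck",
}

SOURCE_INSTANCES = {  # instance -> source concept
    "vw_golf": "Auto",
    "man_tgx": "Lastwagen",
    "boat_1": "Boot",     # no correspondence: cannot be migrated
}

def migrate(instances, mapping):
    """Reclassify each instance under the mapped target concept."""
    migrated, unmapped = {}, []
    for inst, concept in instances.items():
        target = mapping.get(concept)
        if target is None:
            unmapped.append(inst)   # leave for manual classification
        else:
            migrated[inst] = target
    return migrated, unmapped
```

Instances whose source concept has no correspondence (here `boat_1`) are returned separately rather than silently dropped, since they need a new mapping or manual reclassification.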
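Mapping debugging can be illustrated with one simple source of inconsistency: a correspondence that equates two concepts whose ancestors are declared disjoint. The sketch below drops such correspondences; the hierarchy, disjointness axioms, and repair strategy (remove every diagnosed correspondence) are deliberately simplistic stand-ins for what a real debugging service does with a reasoner.

```python
# Sketch of mapping repair: a correspondence a=b is diagnosed as
# erroneous when some ancestor of a and some ancestor of b are declared
# disjoint, which would make the merged ontology inconsistent.

SUPER = {               # concept -> parent (both ontologies pooled)
    "Car": "Vehicle",
    "Auto": "Vehicle",
    "Banana": "Fruit",
}
DISJOINT = {("Vehicle", "Fruit"), ("Fruit", "Vehicle")}

def ancestors(c):
    out = {c}
    while c in SUPER:
        c = SUPER[c]
        out.add(c)
    return out

def is_erroneous(a, b):
    """True if equating a and b clashes with a disjointness axiom."""
    return any((x, y) in DISJOINT
               for x in ancestors(a) for y in ancestors(b))

def repair(mapping):
    """Keep only the correspondences that cause no inconsistency."""
    return [(a, b) for a, b in mapping if not is_erroneous(a, b)]

mapping = [("Car", "Auto"), ("Car", "Banana")]
```

On this data, `repair` keeps the plausible correspondence `Car = Auto` and discards `Car = Banana`, whose ancestors `Vehicle` and `Fruit` are disjoint; a production service would instead compute a minimal set of correspondences to remove.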