Towards leveraging explicit negative statements in knowledge graph embeddings

Abstract

Knowledge Graphs are used in various domains to represent knowledge about entities and their relations. In the vast majority of cases, they capture what is known to be true about those entities, i.e., positive statements, while, under the Open World Assumption, everything not expressed in the graph may or may not be true. As a consequence, information that is explicitly known not to be true, i.e., negative statements, is rarely captured, and doing so is not straightforward. While such negative statements have the potential to yield more useful representations in knowledge graph embeddings, this direction has rarely been explored. However, in many domains, negative information is particularly interesting, for example, in recommender systems, where negative associations between users and items can help in learning better user representations, or in the biomedical domain, where the knowledge that a patient does not exhibit a specific symptom can be crucial for accurate disease diagnosis.

In this paper, we argue that negative statements should be given more attention in knowledge graph embeddings. We investigate how they can be used in knowledge graph embedding methods and highlight their potential in several interesting use cases. We further discuss existing work and preliminary results on incorporating explicitly declared negative statements into walk-based knowledge graph embedding methods. Finally, we outline promising avenues for future research in this area.