Graph Machine Learning (GML) has gained considerable attention for modeling complex graph-structured data, but most existing efforts focus on collecting high-quality data (i.e., data-centric approaches) or developing sophisticated model architectures (i.e., model-centric approaches). Both paradigms come with inherent limitations and challenges: data-centric approaches often demand intensive labor for tasks such as data annotation and cleaning, while model-centric approaches usually require specialized expertise for model refinement. There remains significant unexplored potential in harnessing the useful information that already exists in the data or has been learned by models, i.e., knowledge, as a directive force for learning.
In this dissertation, I introduce a new paradigm of machine learning on graphs: knowledge-centric learning. This paradigm seeks to leverage all available knowledge, whether it comes from data, models, or external sources, to facilitate an effective learning process. My research focuses on three facets of obtaining and leveraging knowledge in GML: learning knowledge from data, distilling knowledge from models, and encoding knowledge from external sources. By anchoring learning on knowledge, this paradigm reduces the reliance on massive data and intricate model architectures. Moreover, knowledge can enhance the performance, trustworthiness, and efficiency of GML models.