Two topics have emerged as central to Responsible Artificial Intelligence (AI): the ability to explain a model's decision-making and the prevention of discriminatory model behavior. Blackbox algorithms, whose decision-making process is hidden from the user, have become increasingly popular because their predictive capabilities surpass those of traditional, interpretable algorithms. However, the opaque nature of these models makes them ill-suited for high-risk domains where accountability for mistakes is paramount, creating the need for explanatory tools. Additionally, AI models trained on real-world data frequently absorb societal biases and learn to discriminate against people based on protected attributes such as race or sex, further limiting their practical applications. This dissertation is centered on the development of Responsible AI systems, proposing foundational methods to advance the fields of interpretability and fairness. We challenge the paradigm that interpretability is a binary notion and propose a novel Mixture of Experts-based approach with partial interpretability. We demonstrate, in multiple settings, its ability to retain the performance of blackbox methods while increasing transparency. Further, we highlight oversights in existing fairness detection techniques. Specifically, we propose a novel fairness measure that accounts for the intersectionality of multiple protected attributes and for the domain imbalance present in regression settings. We also use the tools of explainable AI (XAI) to explore the relationship between equality of predictions and equality of explanations, and we propose a new multi-objective optimization approach toward a more robust fair AI system. Finally, we present a real-world application of an interpretable AI model and discuss the challenges of implementing theoretical models in a practical setting.