Advances in artificial intelligence (AI) have opened new opportunities for human intelligence augmentation (IA)—the use of AI to enhance, rather than replace, human cognitive and social capabilities. Designing AI-assisted IA systems, however, poses unique challenges, particularly in open-ended, exploratory contexts where users’ goals may be dynamic, multi-dimensional, or implicit. Conventional AI systems often rely on explicit user input, limiting their responsiveness to users’ fluid and evolving cognitive needs. This dissertation investigates how we might design interactive, AI-powered systems that offer adaptive, context-sensitive, and on-demand support for complex cognitive tasks. I present four adaptive AI interfaces that augment different facets of human cognition, spanning two primary domains: (1) AI-supported education, where I design systems that scaffold learning and interaction in informal and classroom contexts; and (2) AI-assisted data work, where I develop semi-automated tools that learn from users to streamline workflows and support analytical decision-making. These systems support tasks such as storytelling, argumentative writing, audiovisual data annotation, and collaborative sensemaking. In addition to system development, I report on a field deployment of an AI-assisted writing system (VISAR) in a first-year undergraduate writing course, analyzing usage patterns and user feedback to assess real-world integration. Finally, I introduce the Cognition-Aligned Adaptive Interface (CAAI) framework, which articulates core cognitive dimensions for system design and proposes complementary evaluation methods, including a self-report instrument and quantitative behavioral metrics. In sum, this dissertation contributes new technical systems, empirical findings, and conceptual guidance for designing adaptive AI interfaces that align with and enhance human cognitive processes.