
AI and Agentic Learning

How artificial intelligence — and especially agentic AI systems — is reshaping learning science practice, course design, and assessment.

This file is a partial scaffold. Core themes and research questions are seeded from initial research. Case studies, citations, and empirical findings should be added as the field develops.


Overview

Modern learning science is encountering a new variable: AI systems that do not merely assist learners but act on their behalf. Early AI tutoring systems (Carnegie Learning, ASSISTments) modeled learner knowledge and provided targeted practice. Large language models (LLMs) introduced AI that can explain, generate, converse, and reason in natural language. Agentic AI systems — capable of planning, searching, drafting, and revising — introduce a more fundamental shift: the AI can complete learning tasks, not just support them.

This creates new versions of long-standing learning science questions. What does metacognition look like when an AI handles the monitoring? What does deliberate practice mean when a copilot can scaffold every problem? What does assessment measure when an autonomous tool can produce the assessed artifact?


Core Themes

MOOC design, completion, engagement, and community — AI tutors and conversation agents are being integrated into MOOC platforms to compensate for the absence of instructor contact. Early results show improved persistence and comprehension in some contexts.

Online course pacing, feedback, assessment, and scaffolding — LLMs can generate immediate, personalized feedback on written work, code, and problem solutions at scale — addressing one of the most persistent gaps in online learning (the feedback loop).

Emergency remote teaching versus intentional online learning — AI-assisted tools were rapidly deployed during the pandemic period, often without adequate instructional design scaffolding, accelerating the tension between tool availability and pedagogical intention.

Equity, access, devices, bandwidth, and learner support — AI tutoring tools require connectivity and devices; the same equity gaps that shaped MOOC access shape AI-augmented learning access. Additionally, AI output quality and support vary across languages and cultural contexts.

Mental load, belonging, and persistence — AI systems can reduce cognitive load (by handling lower-level tasks) but may also create agency fatigue (by over-automating choices, reducing learner initiative). The interaction between AI assistance and learner belonging is not yet well understood.

AI tutors, copilots, and human-AI learning loops — AI tutors (Socratic dialogue, hints, explanations) vs. AI copilots (generate-then-review, draft-then-edit) represent different learning loops with different implications for skill formation and transfer.

Agentic learning — Learners using agentic AI tools to plan their own learning, search for resources, draft responses, and revise — with the AI acting at multiple steps of the learning workflow. This is both an opportunity (personalized, self-directed learning at scale) and a risk (cognitive offloading that bypasses the productive struggle required for learning).
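
The tutor-loop/copilot-loop distinction above can be sketched in code. This is a minimal illustration, not drawn from any cited system; `attempt`, `hint`, `draft`, and `review` are hypothetical stand-ins for learner actions and model calls. The structural difference is who originates the artifact at each step.

```python
from typing import Callable, List, Tuple

Step = Tuple[str, str]  # (actor, artifact)

def tutor_loop(attempt: Callable[[str], str],
               hint: Callable[[str], str],
               task: str, rounds: int = 2) -> List[Step]:
    """Tutor pattern: the learner authors every draft; the AI only hints.

    The artifact is always learner-produced, so effortful retrieval and
    error correction stay with the learner.
    """
    history: List[Step] = [("learner", attempt(task))]
    for _ in range(rounds):
        history.append(("ai", hint(history[-1][1])))          # AI reacts with a hint
        history.append(("learner", attempt(history[-1][1])))  # learner revises
    return history

def copilot_loop(draft: Callable[[str], str],
                 review: Callable[[str], str],
                 task: str, rounds: int = 2) -> List[Step]:
    """Copilot pattern: the AI drafts first; the learner reviews and edits.

    The artifact originates with the AI, so the learner's practice is
    evaluation and revision rather than generation.
    """
    history: List[Step] = [("ai", draft(task))]
    for _ in range(rounds):
        history.append(("learner", review(history[-1][1])))   # learner edits the draft
    return history
```

Inspecting the actor sequence of each history makes the skill-formation difference concrete: in the tutor loop every artifact in the record is learner-authored, while in the copilot loop the generative step belongs to the AI from the outset.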


Research Questions

  1. Persistence and mastery — What course and AI design structures improve persistence and mastery in online learning when AI tools are available? Do AI tutors improve outcomes more for learners with lower self-regulation skills?

  2. Cognitive load and offloading — Which AI assistance patterns reduce extraneous load (good) versus which induce cognitive offloading of skills the learner should be developing (bad)?

  3. Post-pandemic permanence — What changed permanently in learner expectations after the pandemic, and how does that interact with AI tool adoption?

  4. AI assistance and skill formation — Where do AI agents support skill development (providing worked examples, explaining concepts, removing friction), and where do they weaken it (completing the work that constitutes the learning)?

  5. Assessment validity — How should assessments adapt when students use copilots and autonomous tools? What constructs can still be validly assessed, and what new assessment designs are needed?


Key Conceptual Tensions

  • Scaffolding vs. offloading — AI support that helps a learner do what they cannot yet do alone (ZPD) vs. AI support that does it for them, preventing skill development
  • Agency vs. agency fatigue — AI tools that expand learner agency by removing friction vs. AI systems that over-automate and reduce learner initiative
  • Feedback at scale vs. feedback quality — LLM-generated feedback reaches every learner instantly but may vary in specificity, accuracy, and calibration to the learner's actual misconception
  • Personalization vs. equity — Personalized AI learning pathways require data and access; learners without reliable devices, bandwidth, or language support are excluded

Learning Science Frameworks Most Relevant to AI-Mediated Learning

  • Zone of proximal development (Vygotsky) — AI tutors are most beneficial when targeting the ZPD; the risk is that AI lowers the floor of what learners must do independently
  • Cognitive load theory (Sweller) — AI can dramatically reduce extraneous load; the design question is whether germane load (effort that builds understanding) is preserved
  • Self-regulated learning (Zimmerman) — AI that handles planning, monitoring, or reflection phases of the SRL cycle may atrophy those capacities in learners
  • Deliberate practice (Ericsson) — Deliberate practice requires targeted effort on weaknesses with feedback; AI that always provides the answer bypasses the effortful retrieval and error correction that make deliberate practice work
  • Assessment validity — Validity theory requires that an assessment measure the construct it claims to measure; AI-assisted task completion changes what many traditional assessments actually measure

Connections to Open edX

The Open edX platform is actively developing AI-augmented features. See open-edx-platform-atlas for details on the AI & Advanced Features area (developer-platform/05-ai-advanced-features.md). Learning science principles — particularly cognitive load, scaffolding, and assessment validity — are directly relevant to evaluating and designing these integrations.

Schema Education — Internal Research