Research


Advancing relational AI, institutional integrity, and human-centered learning systems.

This research examines how AI systems reshape agency, authority, and responsibility across educational, civic, and governance contexts. The work integrates relational theory, institutional analysis, and design frameworks to clarify how emerging technologies influence human judgment over time.

Current projects include manuscripts in review, working papers, and applied theoretical models that inform the broader ecosystem of frameworks and learning systems presented throughout this site.


Research Domains


1. Relational AI & Institutional Integrity

This domain investigates how AI systems redistribute interpretive authority within institutions. It examines relational integrity, legitimacy under algorithmic influence, and governance readiness across educational and civic systems.

Projects in this area include work on Relational Integrity in AI (RIAF), institutional legitimacy under algorithmic authority (SCIS), and structural analyses of responsibility relocation in AI-mediated environments.

2. Learning Ecosystems & Human Development

This research explores AI as co-regulatory infrastructure within learning systems. Rather than framing AI as a tutor or a replacement, this work situates AI systems within distributed networks of learners, educators, and institutional design.

Key contributions include the AI as Co-Regulator manuscript, interpretive calibration models, and theoretical foundations for human-centered AI literacy and youth readiness pathways.

3. Applied Relational Systems & Simulation

This domain develops and tests dialogical systems, governance simulations, and reflective AI environments. It examines how relational depth and institutional safeguards can be embedded into practical tools and simulated decision environments.

Projects in this area inform the development of simulation labs, relational AI systems, and governance testing environments that translate theory into practice.


Current Manuscripts


Relational Integrity in AI: Preserving Human Agency, Accountability, and Meaning Under Pressure

SSRN

Preprint / In development

RIAF provides a formal framework for assessing relational erosion and authority displacement in AI-mediated systems. It identifies staged patterns of relational drift and proposes institutional safeguards for preserving interpretive agency.


Legitimacy Under Algorithmic Authority: A Relational Diagnostic of Leadership Education in AI-Mediated Contexts

Wiley > New Directions for Student Leadership

In review

This paper introduces a structural lens for examining how algorithmic systems reshape authority and legitimacy in leadership development contexts. Drawing from institutional theory and relational AI models, it proposes diagnostic tools for evaluating responsibility relocation and structural drift.


AI as Co-Regulator: Relational Design for Strengthening Self-Regulated Learning

Frontiers in Education > Digital Learning Innovations > Harnessing AI to Support Self-Regulated Learning in Educational and Workplace Settings

In review

This manuscript develops a relational model of AI as co-regulatory infrastructure within learning environments. It integrates self-regulated learning theory with relational AI design principles to propose structural safeguards that preserve learner agency while enhancing interpretive capacity.


Additional Working Papers


Emerging work includes:

  • Responsible Stewardship Innovation
  • Interpretive Calibration in AI-Mediated Systems

Manuscripts in Preparation

Authority Migration and Algorithmic Legitimacy in AI-Integrated Knowledge Work (TATuP)

Operationalizing Responsibility Under Infrastructural AI: A Relational Diagnostic Framework (Journal of Responsible Innovation)

Belonging as Architecture: Interpretive Calibration and the Structural Conditions of Care (Leonardo)

Select preprints and drafts are available upon request.


Methodological Commitments


This research integrates:

  • Institutional and socio-technical systems theory
  • Learning sciences and self-regulated learning models
  • Relational AI frameworks
  • Responsible innovation principles

Where appropriate, conceptual models are paired with applied simulations, design pilots, and governance testing environments to examine structural effects over time.