Innovation Project Update

Designing a Sustainable AI-Supported Conversational Framework for Spanish Oral Proficiency

Overview of the Innovation Project

     My innovation project began as a classroom-based initiative designed to address a persistent instructional gap in secondary Spanish education: limited structured opportunities for meaningful oral communication. While students were successfully completing written assignments and vocabulary assessments, speaking remained inconsistent, anxiety-driven, and difficult to measure with clarity. Traditional classroom structures allowed only brief, public speaking opportunities, often reinforcing hesitation rather than building fluency.

     What started as an exploration of AI-supported conversational tools quickly revealed a broader systems issue. The challenge was not access to technology alone; it was the absence of a structured feedback framework that allowed students to rehearse, reflect, and refine communicative performance over time. As the project evolved throughout the Applied Digital Learning program, the focus shifted from tool adoption to system design. AI-supported simulations became one component of a larger instructional framework grounded in formative assessment theory (Black & Wiliam, 2009), significant learning environments (Fink, 2013), and the COVA model (Harapnuik et al., 2015).

     The purpose was never simply to introduce artificial intelligence into the classroom. Instead, the goal was to build a sustainable conversational ecosystem that promotes student autonomy, reduces performance anxiety, and aligns speaking practice with measurable proficiency outcomes.​

Original Innovation Plan

     The original plan centered on increasing opportunities for conversational rehearsal through AI-generated prompts aligned to thematic units. Early drafts focused heavily on tool integration and engagement strategies. While the concept was promising, the initial structure lacked clearly defined measurement checkpoints and alignment with broader instructional frameworks.​

                         Original Innovation Plan

Updated Innovation Plan

     As coursework deepened my understanding of systems thinking and instructional alignment, the innovation matured into a structured conversational framework embedded within weekly classroom routines. The updated plan clarified learning outcomes, aligned activities with ACTFL proficiency descriptors, and integrated rubric-based self-assessment and reflection cycles. This revision shifted the project from exploratory technology use to intentional instructional design supported by research and measurable goals.​

                           Updated Innovation Plan 

 

Course Integration and Real-World Application

     Throughout the Applied Digital Learning program, this innovation served as the authentic context for nearly every major assignment. Rather than completing coursework in isolation, I consistently used my Spanish classroom as the laboratory for applying theory to practice.​

     In coursework focused on learning design, I applied the principles of significant learning environments to ensure alignment between outcomes, activities, and assessments (Fink, 2013). This alignment clarified that conversational simulations must be directly connected to measurable proficiency targets rather than functioning as enrichment activities.​

                            Fink's 3 Column Table​​

     Revisiting and refining my literature review across courses strengthened the theoretical grounding of the project. Research on formative assessment reinforced the importance of embedded feedback cycles (Black & Wiliam, 2009). Each course contributed to refining the innovation from a classroom strategy into a research-informed instructional system.

                               Literature Review

Action Research Design and Measurement Development

     As the innovation evolved, I began considering how to measure its impact more intentionally. In addition to designing the AI-supported conversational framework, I created structured lesson plans aligned to ACTFL proficiency descriptors and embedded them into weekly Canvas modules. These lessons provide guided conversational prompts, scaffolded rehearsal opportunities, and rubric-based self-assessment cycles that support measurable communicative growth.

     To support evaluation, I developed a structured speaking-confidence survey measuring students’ comfort with spontaneous communication, perceived autonomy, and willingness to take risks in Spanish. Students completed the pre-survey in January 2026 at the beginning of pilot implementation, providing baseline data on speaking anxiety and confidence levels.

     The pilot phase began in January 2026 and continues through May 2026. During this period, students engage in consistent AI-supported conversational practice embedded within weekly instruction. Students will complete the post-survey in May 2026 at the conclusion of the pilot phase. This pre- and post-comparison will allow for structured analysis of changes in speaking confidence, perceived autonomy, and communicative growth following sustained AI-supported conversational practice.
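The pre- and post-comparison described above can be sketched in a few lines of analysis, assuming the survey responses are exported as matched lists of Likert-scale scores per student. All values below are hypothetical placeholders, not actual survey data:

```python
# Hypothetical pre/post speaking-confidence scores (1-5 Likert scale),
# matched by student; real data would come from the survey export.
pre_scores = [2, 3, 2, 1, 3, 2, 4, 2]
post_scores = [3, 4, 3, 3, 4, 3, 4, 3]

def mean(xs):
    """Arithmetic mean of a list of scores."""
    return sum(xs) / len(xs)

# Paired differences: positive values indicate growth in confidence.
diffs = [post - pre for pre, post in zip(pre_scores, post_scores)]

print(f"Mean pre-survey score:   {mean(pre_scores):.2f}")
print(f"Mean post-survey score:  {mean(post_scores):.2f}")
print(f"Mean paired gain:        {mean(diffs):.2f}")
print(f"Students showing growth: {sum(d > 0 for d in diffs)} of {len(diffs)}")
```

Because each post-survey score is paired with the same student's pre-survey score, the analysis captures individual growth rather than only group averages, which fits the project's emphasis on learner-level tracking.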

     This shift toward intentional measurement directly influenced revisions to my original implementation outline. Rather than broadly referencing data collection, I established clearly defined checkpoints aligned to instructional phases. Embedding measurement planning into classroom routines represents a significant shift in my professional thinking, from informal observation to structured evaluation.

     Following post-survey analysis in May 2026, findings will inform instructional refinements for the 2026–2027 academic year. Moving forward, I plan to integrate rubric-based proficiency checkpoints alongside survey data to strengthen reliability and build a sustainable longitudinal tracking system.​

Original Innovation Outline Plan

Updated Innovation Outline Plan

Current Status and Remaining Work

     The AI-supported conversational framework is currently embedded into weekly classroom routines through structured Canvas modules. Students engage in guided conversational prompts, complete rubric-based self-assessments, and submit reflective responses that inform instructional adjustments.

     The classroom pilot began in January 2026, and students completed the pre-survey at the start of implementation to establish baseline data on speaking confidence and perceived autonomy. The remaining work includes completing the pilot phase through May 2026, administering the post-survey, analyzing growth trends, and synthesizing findings into actionable instructional refinements.

     Initial evaluation and data analysis will be finalized in May 2026 at the conclusion of the pilot phase. These findings will directly inform lesson refinement and instructional adjustments for the 2026–2027 academic year, as well as guide potential expansion to additional class sections.

What Worked

     Grounding the innovation in a clearly visible problem of practice (student hesitation and inconsistent oral performance) created authentic urgency. Because the challenge was experienced daily, the work became an instructional necessity rather than a theoretical exploration. Embedding AI-supported rehearsal within predictable weekly routines reduced anxiety and increased willingness to engage in spontaneous communication.

     Aligning conversational practice to structured rubrics improved clarity of expectations and strengthened student ownership. Rather than relying solely on public correction, students rehearsed privately, reflected intentionally, and approached classroom dialogue with greater confidence.

What Could Be Improved

     If revisiting the initial rollout, I would establish more formalized baseline proficiency metrics prior to implementation. Although speaking anxiety was visible, more structured documentation of initial proficiency levels would strengthen longitudinal comparison.

     Additionally, earlier development of explicit AI-integrity and ethical-use protocols would have streamlined expectations and clarified boundaries from the outset. These reflections represent professional maturation rather than deficiency. Sustainable innovation requires both structural precision and cultural calibration.

Lessons Learned

     This process reinforced that sustainable innovation must be systemic rather than tool-driven. AI alone is not innovation. True innovation occurs when instructional design, formative feedback, learner autonomy, and structured measurement align toward a clearly articulated communicative goal.

     Leadership within the classroom is less about launching initiatives and more about sustaining clarity, consistency, and reflective refinement. This project strengthened my confidence as an instructional designer and deepened my understanding of responsible technology integration grounded in research and measurable outcomes.

Audience and Purpose

     This section of my ePortfolio is intended for instructional leaders, world language educators, and digital learning practitioners seeking sustainable models for integrating AI into communicative language instruction. My purpose is not merely to document implementation, but to demonstrate how research-informed design, structured feedback cycles, and reflective practice can transform classroom speaking culture.

Looking Forward

     Looking ahead, I will embed structured measurement frameworks at the launch of future initiatives rather than layering them in later stages. Establishing longitudinal tracking systems will allow clearer visibility into growth patterns and support data-informed instructional decisions.

     Following post-survey analysis in May 2026, I will refine lesson structures and strengthen rubric-based proficiency checkpoints to increase reliability and clarity. Ultimately, the future of this initiative is not tied to artificial intelligence alone, but to a sustainable instructional culture that prioritizes autonomy, clarity, and authentic communication.

 

References

Black, P., & Wiliam, D. (2009). Developing the theory of formative assessment. Educational Assessment, Evaluation and Accountability, 21(1), 5–31.

Dabbagh, N., & Castaneda, L. (2020). The PLE as a framework for developing agency in lifelong learning. Educational Technology Research and Development, 68(6), 3041–3055.

Fink, L. D. (2013). Creating significant learning experiences (Rev. ed.). Jossey-Bass.

Harapnuik, D., Thibodeaux, T., & Cummings, C. (2015). COVA and creating significant learning environments. EDUCAUSE Review.

VanPatten, B., Smith, M., & Benati, A. (2020). Key questions in second language acquisition (2nd ed.). Cambridge University Press.
