"Our Mission is to Build on Theories of Learning and Instruction to Create Innovative Learning Environments that Maximize Learner Capacity to Achieve Learning Goals"

Five papers accepted to ISLS 2024 Annual Meeting

April 12, 2024

We're excited to announce that five of our papers have been accepted for presentation at the International Society of the Learning Sciences (ISLS) 2024 Annual Meeting in Buffalo, New York, taking place from June 10th to 14th. These papers, stemming from our NSF AI ALOE and NSF SaTC projects, cover our work on AI-augmented concept learning, a private AI curriculum in computer science education, AI-augmented summarization, and the evaluation of learner comprehension through AI techniques. We look forward to sharing our findings!

Bae, Y., Kim, J., Davis, A., & Kim, M. (accepted). A study on AI-augmented concept learning: Impact on learner perceptions and outcomes in STEM education. Proceedings of the 18th International Conference of the Learning Sciences/Computer-Supported Collaborative Learning (ICLS/CSCL-2024). Buffalo, NY: International Society of the Learning Sciences.  

Abstract: This study explores the efficacy of AI-enhanced concept learning among adult learners, aiming to bolster their comprehension, ease their acceptance of the technology, refine their metacognitive reading strategies, and improve subsequent knowledge test scores. Leveraging an AI-driven formative assessment feedback system named SMART, AI integration was implemented in pre-class activities within a Biology course. Learners demonstrated enhanced mental models of STEM readings, and while gains in technology acceptance were not statistically significant, we observed numerical increases in perceived AI usefulness. However, no significant relations were found with perceived ease of use or metacognitive awareness. The impact of concept learning through SMART on knowledge test scores was only partially visible. This research underscores the holistic integration of AI tools, highlighting the importance of educators aligning instructional methods such as AI with learning objectives, content, assessment tests, and learners’ AI literacy levels, particularly within the domain of online STEM education.

Haddadian, G., Panzade, P., Takabi, D., & Kim, M. (accepted). Evaluating private artificial intelligence (AI) curriculum in computer science (CS) education: Insights for advancing student-centered CS learning. Proceedings of the 18th International Conference of the Learning Sciences/Computer-Supported Collaborative Learning (ICLS/CSCL-2024). Buffalo, NY: International Society of the Learning Sciences.  

Abstract: This study was undertaken to pilot a Private AI curriculum designed with a problem-centered instruction (PCI) approach for post-secondary Computer Science (CS) education. To this end, a condensed version of one of the ten curricular modules was implemented in a two-hour workshop. The mixed-method data analysis revealed participants' positive motivation and interest in the curriculum, while also pinpointing opportunities to further improve the design strategies of the curriculum.

Kim, J., Bae, Y., Stravelakis, J., & Kim, M. (accepted). Investigating the influence of AI-augmented summarization on concept learning, summarization skills, argumentative essays, and course outcomes in online adult education. Proceedings of the 18th International Conference of the Learning Sciences/Computer-Supported Collaborative Learning (ICLS/CSCL-2024). Buffalo, NY: International Society of the Learning Sciences.  

Abstract: This study aims to explore the influence of concept learning facilitated by an AI-augmented summarization feedback tool, the Student Mental Model Analyzer for Research and Teaching (SMART), on various learning outcomes within an undergraduate English course using linear mixed-effects (LME) modeling and Bayesian correlations with data from 22 participants. Significant improvements in learners’ mental models and associations of concept learning with subsequent learning activities suggest the potential of such tools in improving learning performance.

Kim, J., Lee, T., Bae, Y., & Kim, M. (accepted). A comparison between AI and human evaluation with a focus on generative AI. Proceedings of the 18th International Conference of the Learning Sciences/Computer-Supported Collaborative Learning (ICLS/CSCL-2024). Buffalo, NY: International Society of the Learning Sciences.  

Abstract: This study aims to explore the utility of generative AI in providing formative assessment and feedback. Using data from 43 learners in an instructional technology class, we assessed generative AI’s evaluative indices and feedback capabilities by comparing them to human-rated scores, employing linear mixed-effects (LME) models, correlation analyses, and a case study methodology. Our findings suggest that generative AI can produce reliable evaluations for detecting learners’ progress. Moderate correlations were found between generative AI-based evaluations and human-rated scores, and generative AI demonstrated potential in providing formative feedback by identifying strengths and gaps. These findings suggest the potential of generative AI to offer new insights and to automate formative feedback that provides learners with detailed scaffolding for summary writing.

Kim, M., Kim, J., Bae, Y., Morris, W., Holmes, L., & Crossley, S. (accepted). How AI evaluates learner comprehension: A comparison of knowledge-based and large language model (LLM)-based AI approaches. Proceedings of the 18th International Conference of the Learning Sciences/Computer-Supported Collaborative Learning (ICLS/CSCL-2024). Buffalo, NY: International Society of the Learning Sciences.

Abstract: This study investigated two AI techniques for evaluating learners’ summaries and explored the relationship between them: the SMART knowledge-based AI tool, which generated multidimensional measures representing knowledge models derived from learner summaries, and a Large Language Model (LLM) fine-tuned for summary scoring. The LLM model incorporated both the summary and source texts in the input sequence to calculate two component scores related to text content and wording. Summary revisions from 172 undergraduates in English and Biology classes were analyzed. The results of linear mixed-effects models revealed that both AI techniques detected changes during revisions. Several SMART measures were positively associated with an increase in the LLM’s Content scores. These findings support the notion that the LLM model excels at broad and comprehensive assessment, while SMART measures are more effective in providing fine-grained feedback on specific dimensions of knowledge structures.

Dr. Min Kyu Kim presented about the theoretical underpinnings for the SMART project

April 1, 2024

Dr. Min Kyu Kim, our director, presented the theory-driven development and research for SMART at today's AI-ALOE Foundational and Use-Inspired AI Meeting.

Dr. Kim began by outlining the project's aim of aiding learners in understanding key concepts. He elaborated on how theories such as Personalization, Community of Inquiry (COI), and ICAP (Interactive, Constructive, Active, Passive) have influenced SMART's design. Additionally, Dr. Kim shared our research questions and data collection methods within the SMART project. He also raised the question of having shareable instruments among the ALOE research teams and proposed the development of shareable theoretical frameworks to facilitate collaboration and research involving multiple AI tools. Dr. Kim led discussions on external threats and opportunities relevant to our work within the AI-ALOE as well.

Welcoming our visiting scholar: Sua Im!

March 31, 2024

We are pleased to welcome Sua Im to our lab as a visiting scholar!

Sua Im has joined our lab as a visiting scholar through the Outstanding Graduate Students’ Overseas Training Program (BK21 FOUR Project) sponsored by The National Research Foundation of Korea. Sua holds bachelor's degrees in French language and literature and in sports industry, as well as a master's degree in sports industry, all from Yonsei University, South Korea. She is currently advancing her Ph.D. studies at Yonsei. Her research focuses on various aspects of sports, serious leisure, and enhancing quality of life. Currently, she is exploring the intersection of AI-based programs and well-being among older adults within senior centers.

AI2 members presented their third-year research at the AI ALOE retreat on March 7–8.

March 18, 2024

Dr. Min Kyu Kim, accompanied by two AI2 graduate associates, Jinho Kim and Yoojin Bae, participated in and delivered three presentations during the AI ALOE retreat at the CODA building at Georgia Tech on March 7th and 8th. This retreat marked the third year of ALOE activities, where researchers from multiple institutions shared their Year 3 studies, with a focus on theory, experimentation, and data analysis.

1. Theoretical Framework for the SMART Project

During the morning session of the first day, on March 7th, Dr. Kim proposed a hierarchical model of theories from a design research perspective to explain the theoretical framework for the SMART project. Specifically, he leveraged theories of mental models and self-determination theory as foundational principles supporting cognitive and motivational analysis in SMART design experiments. Additionally, he demonstrated how design guidelines derived from the general principles of the Community of Inquiry (COI) model and the Interactive-Constructive-Active-Passive (ICAP) framework were applied to SMART development.

2. SMART Experiments 

In the afternoon session on March 7th, Yoojin presented the SMART experiments conducted during Fall 2023 and Spring 2024 in English and Biology classes at Technical College System of Georgia (TCSG) colleges. To assess different user experiences and levels of adaptation to the SMART AI technology, we conducted A/B experiments as a quasi-experiment and as a randomized controlled trial, respectively. Yoojin demonstrated how AI2 researchers collaborated with instructors to design and implement SMART-integrated teaching and learning. Additionally, she shared reflections on successes and failures to inform future AI-related experiment designs, including automated data collection at scale.

3. Summative Evaluation of the Three-Year SMART Deployment

On March 8th, Jinho presented our data analysis findings from the three-year deployment of SMART technology in two TCSG courses, English and Biology. As the AI ALOE team is currently in the midst of the project's third year, evaluating the effectiveness of the SMART deployment for learner performance is crucial. To achieve this, we defined learning at both the micro and meso levels. Micro-level learning pertains to specific learning assignments within a unit; in this project, summarization tasks on SMART serve as examples. Meso-level learning, on the other hand, occurs across assignments throughout a course over a semester. Our assumptions were twofold: (a) learners' efforts in revising with SMART improve their conceptual understanding of the course materials (micro-level learning), and (b) this improved conceptual understanding leads to higher performance in subsequent learning activities. To test these assumptions, we fitted linear mixed-effects models, which revealed a positive and significant impact across the courses.
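To illustrate the general shape of this kind of analysis, here is a minimal sketch of a linear mixed-effects model with a random intercept per learner, written with Python's statsmodels library. The column names (`revision_gain`, `next_score`) and the synthetic data are hypothetical illustrations, not the study's actual dataset or model specification.

```python
# Hypothetical sketch: does micro-level revision gain predict performance on
# subsequent assignments, accounting for repeated measures within learners?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_learners, n_assignments = 30, 4

# Long-format synthetic data: one row per learner per assignment.
df = pd.DataFrame({
    "learner": np.repeat(np.arange(n_learners), n_assignments),
    "revision_gain": rng.normal(0.5, 0.2, n_learners * n_assignments),
})

# Simulate the assumed effect: revision gains raise later scores,
# with learner-level variation on top of residual noise.
learner_effect = rng.normal(0, 0.3, n_learners)
df["next_score"] = (
    70 + 10 * df["revision_gain"]
    + learner_effect[df["learner"].to_numpy()]
    + rng.normal(0, 1.0, len(df))
)

# Random intercept per learner handles the repeated-measures structure.
model = smf.mixedlm("next_score ~ revision_gain", df, groups=df["learner"])
result = model.fit()
print(result.params["revision_gain"])  # estimated fixed effect of revision gain
```

The random intercept (`groups=df["learner"]`) is what distinguishes this from ordinary regression: it absorbs stable between-learner differences so the fixed effect reflects the within-course association between revision gains and later performance.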