"Our Mission is to Build on Theories of Learning and Instruction to Create Innovative Learning Environments that Maximize Learner Capacity to Achieve Learning Goals"

Dr. Min Kyu Kim chaired the Learning@Scale 2024 Conference

July 25, 2024

Our director, Dr. Min Kyu Kim, served as a program co-chair for the Learning@Scale 2024 Conference, held at the Georgia Institute of Technology from July 17 to 19.

Learning@Scale 2024 was themed “Scaling Learning in the Age of AI.” The conference focused on the potential of generative AI to advance pedagogical practices and the efficacy of learning at scale. Learning@Scale 2024 was co-located with the Educational Data Mining 2024 conference. If you are interested in the details, please visit the official website: https://learningatscale.hosting.acm.org/las2024/ 


The wildly successful conference was organized by:

  • Program Co-Chair: Min Kyu Kim, Georgia State University
  • Program Co-Chair: Xu Wang, University of Michigan
  • Program Co-Chair: Meng Xia, Texas A&M University
  • General Chair: David Joyner, Georgia Tech

The proceedings are already available for download from the ACM Digital Library (https://dl.acm.org/doi/proceedings/10.1145/3657604?tocHeading=heading6).


Next year’s Learning@Scale will be held in Palermo, Sicily, Italy. See you all in Palermo!

Welcoming our visiting scholar: Sumin Hong!

June 24, 2024

We are excited to welcome Sumin Hong to our lab as a visiting scholar!

Sumin Hong has joined our lab as a visiting scholar for the summer of 2024. She is currently a PhD candidate at Seoul National University, South Korea. Her research interests center on technology-integrated instructional design, including artificial intelligence, virtual reality, virtual worlds, and collaborative learning tools for meaningful learning. During her visit, she is exploring AI-integrated education and immersive learning for adult learners.

Presentations at AI-ALOE's 2024 Annual Review Meeting

June 21, 2024

Director Dr. Min Kyu Kim and our graduate associate Jinho Kim attended and presented at AI-ALOE's 2024 Annual Review Meeting on June 21, 2024. The AI-ALOE team, comprising scholars, researchers, scientists, and student researchers, presented updates on our progress to the NSF evaluation team.

At the meeting, Jinho presented on Fostering Understanding and Knowledge Acquisition, and Dr. Kim chaired and presented in the Panel on Personalization.


Fostering Understanding and Knowledge Acquisition

In the Fostering Understanding segment of the Core Research: Performance Measurement and Evaluation session, Jinho introduced our Year 3 research focus: assessing the real impact of SMART on adult learning and online education through a combined summative and midterm evaluation approach, and using a longitudinal approach to examine the impact of SMART on learners' ability to transfer learning to subsequent course tasks. Along with the issue hypothesis tree and the design strategies of SMART, we shared data analyses and results from three years of SMART deployment, as well as our next steps.


Panel on Personalization

In the afternoon, Dr. Kim chaired the Panel on Personalization, introducing our efforts to conceptualize personalization and to build a design framework for personalized learning with AI. He first shared ALOE's strategy for developing a multi-dimensional design guideline for personalized learning. Dr. Kim then showcased how we have supported personalized concept learning through SMART, discussed where SMART lies on the theory-laden design dimensions for personalization, and introduced SMART's personalization strategies and feedback, along with experiments and results.


For more information about the Annual Review Meeting, please visit the following links: 

AI2RL at the 2024 ISLS Annual Meeting

June 14, 2024

Our AI2RL members attended and presented three short papers and two posters at the 2024 International Society of the Learning Sciences (ISLS) Annual Meeting in Buffalo, New York, which took place from June 10th to 14th.


A study on AI-augmented concept learning: Impact on learner perceptions and outcomes in STEM education
Tuesday, June 11th, 2:30 to 3:30 PM, Jacobs 1225 B - AI and Tech-Enhanced Learning Environments 

Abstract: This study explores the efficacy of AI-enhanced concept learning among adult learners, aiming to bolster their comprehension and facilitate their transition to embracing technology, refining metacognitive reading strategies, and improving subsequent knowledge test scores. Leveraging an AI-driven formative assessment feedback system named SMART, AI integration was implemented in pre-class activities within a Biology course. Learners demonstrated enhanced mental models of STEM readings, and while changes in technology acceptance were not statistically significant, we observed numerical increases in perceived AI usefulness. However, no significant relations were found with perceived ease of use or metacognitive awareness. The impact of concept learning through SMART on knowledge test scores was only partially evident. This research underscores the holistic integration of AI tools, highlighting the need for educators to align instructional methods such as AI with learning objectives, content, assessment tests, and learners’ AI literacy levels, particularly within the domain of online STEM education.


Investigating the influence of AI-augmented summarization on concept learning, summarization skills, argumentative essays, and course outcomes in online adult education
Tuesday, June 11th, 4:00 to 5:30 PM, Jacobs 2nd Floor Atrium - Posters

Abstract: This study aims to explore the influence of concept learning facilitated by an AI-augmented summarization feedback tool, the Student Mental Model Analyzer for Research and Teaching (SMART), on various learning outcomes within an undergraduate English course using linear mixed-effects (LME) modeling and Bayesian correlations with data from 22 participants. Significant improvements in learners’ mental models and associations of concept learning with subsequent learning activities suggest the potential of such tools in improving learning performance.

A study on AI-augmented concept learning: Impact on learner perceptions and outcomes in STEM education
Thursday, June 13th, 10:45 to 11:45 AM, Jacobs 2134 - Learning Feedback and Assessment 

Abstract: This study aims to explore the utility of generative AI in providing formative assessment and feedback. Using data from 43 learners in an instructional technology class, we assessed generative AI’s evaluative indices and feedback capabilities by comparing them to human-rated scores. To do this, the study employed linear mixed-effects (LME) models, correlation analyses, and a case study methodology. Our findings point to an effective generative AI model that generates reliable evaluations for detecting learners’ progress. Moderate correlations were found between generative AI-based evaluations and human-rated scores, and generative AI demonstrated potential in providing formative feedback by identifying strengths and gaps. These findings suggest the potential of utilizing generative AI to provide different insights as well as to automate formative feedback that can offer learners detailed scaffolding for summary writing.

How AI evaluates learner comprehension: A comparison of knowledge-based and large language model (LLM)-based AI approaches
Thursday, June 13th, 1:15 to 2:15 PM, Jacobs 2220 B - Large Language Models and Learning 

Abstract: This study investigated two AI techniques for evaluating learners’ summaries and explored the relationship between them: the SMART knowledge-based AI tool, which generated multidimensional measures representing knowledge models derived from learner summaries, and a Large Language Model (LLM) fine-tuned for summary scoring. The LLM model incorporated both the summary and source texts in the input sequence to calculate two component scores related to text content and wording. Summary revisions from 172 undergraduates in English and Biology classes were analyzed. The results of linear mixed-effects models revealed that both AI techniques detected changes during revisions. Several SMART measures were positively associated with an increase in the LLM’s Content scores. These findings support the notion that the LLM model excels at broad and comprehensive assessment, while SMART measures are more effective in providing fine-grained feedback on specific dimensions of knowledge structures.

Evaluating private artificial intelligence (AI) curriculum in computer science (CS) education: Insights for advancing student-centered CS learning
Thursday, June 13th, 4:00 to 5:30 PM, Jacobs 2nd Floor Atrium - Posters 

Abstract: This study was undertaken to pilot a Private AI curriculum designed with a problem-centered instruction (PCI) approach for post-secondary Computer Science (CS) education. To this end, a condensed version of one of the ten curricular modules was implemented in a two-hour workshop. The mixed-method data analysis revealed participants' positive motivation and interest in the curriculum, while also pinpointing opportunities to further improve the design strategies of the curriculum.