"Our Mission is to Build on Theories of Learning and Instruction to Create Innovative Learning Environments that Maximize Learner Capacity to Achieve Learning Goals"

Welcoming Seora Kim as a graduate research associate

August 26, 2024

We are happy to welcome Seora Kim as a new graduate research associate! 

Seora Kim is a Graduate Research Associate and a Ph.D. student in the Department of Learning Sciences at the College of Education and Human Development, Georgia State University (GSU). She earned her Bachelor of Arts (B.A.) in Education and English Language and Literature and her Master of Arts (M.A.) in Education from Yonsei University, South Korea. Her research interests include educational technology and artificial intelligence-assisted learning to enhance critical thinking and creativity.

LT PhD Welcome (Back) Party

August 24, 2024

Happy New Academic Year 2024-2025!

We kicked off the new semester with our Learning Technology Welcome (Back) Party on August 24th!
The event was held at Wood's Chapel BBQ, where faculty members, graduate students, and their families gathered for a fun-filled afternoon of games, greetings, and camaraderie. With around 20 attendees, it was a wonderful opportunity to catch up with old friends and welcome new faces. We all had a fantastic time reconnecting and starting the academic year on a positive note! 

Kicking Off Happickle Fridays

August 23, 2024

We kicked off our weekly Pickleball activity, "Happickle," on August 23rd at the Student Recreation Center! Our lab members gathered for a fun session of Pickleball, and it was a fantastic experience. Our visiting scholar and great coach, Sua, taught us step-by-step—from how to grip the pickleball paddle, to the basics of hitting the ball, and finally, to playing doubles games. Even the more complex rules were explained clearly and easily. A big thank you to Sua for her excellent coaching!

We’re also excited to unveil our new logo, created by our talented graduate associate, Seora :) 

We plan to meet every Friday throughout the semester, so come join us for some fun and games!
Find out more on our Instagram page: @happickle_gsu!

Lia's recent achievements. Congratulations!!

August 15, 2024

Congratulations to Lia (our graduate associate) on her recent achievements! Lia has earned several accolades, including a manuscript publication, an accepted conference presentation, and an appointment as the Africa and Middle East Regional Representative for the International Learning Sciences Student Association (ILSSA) at ISLS/ICLS. Please refer to the details below.

Construction and Validation of a Computerized Formative Assessment Literacy (CFAL) Questionnaire for Language Teachers: An Exploratory Sequential Mixed-methods Investigation

Lia Haddadian recently published a study developing and validating a Computerized Formative Assessment Literacy (CFAL) questionnaire for language teachers. Recognizing the need for valid and reliable instruments to measure teachers’ literacy in Computerized Formative Assessment (CFA), the study adopted an exploratory sequential mixed-methods design drawing on a dual deductive-inductive approach. A total of 489 English as a Foreign Language (EFL) teachers, teaching levels ranging from elementary to advanced in eight major cities across Iran, participated. Through Exploratory Factor Analysis (EFA), the research identified six key factors: practical, theoretical, socio-affective, critical, identity-related, and developmental. The findings offer valuable insights and significant implications for the development of professional programs aimed at assessing and enhancing teachers’ CFAL. Such programs are essential for ensuring that teachers are well equipped to leverage CFA effectively, ultimately improving both teaching practices and student learning outcomes.

Keywords: assessment, assessment literacy, formative assessment, formative assessment literacy, computerized assessment, computerized assessment literacy, literacy instruments, assessment literacy instruments 

Haddadian, G., Radmanesh, S., & Haddadian, N. (2024). Construction and validation of a Computerized Formative Assessment Literacy (CFAL) questionnaire for language teachers: An exploratory sequential mixed-methods investigation. Language Testing in Asia, 14(33). https://doi.org/10.1186/s40468-024-00303-2
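For readers curious about the factor-analytic step, here is a minimal, hypothetical sketch of how a six-factor EFA on questionnaire responses might be set up in Python with the factor_analyzer package. The file name, item columns, and rotation choice are placeholders for illustration, not details taken from the study.

```python
# Hedged sketch only: illustrates a six-factor EFA like the one described above.
# The CSV, column names, and rotation are assumptions, not the study's actual setup.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo

# Hypothetical item-level CFAL responses: one row per teacher, one column per item.
responses = pd.read_csv("cfal_responses.csv")

# Check sampling adequacy before extraction (KMO above ~0.6 is a common rule of thumb).
_, kmo_total = calculate_kmo(responses)
print(f"Overall KMO: {kmo_total:.2f}")

# Extract six factors, mirroring the six-factor structure reported in the paper.
efa = FactorAnalyzer(n_factors=6, rotation="oblimin")
efa.fit(responses)

# Inspect loadings to label the factors (e.g., practical, theoretical, socio-affective, ...).
loadings = pd.DataFrame(efa.loadings_, index=responses.columns)
print(loadings.round(2))
```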

 

The Impact of AI-Enabled Personalized Recommendations on L2 Learners' Engagement, Motivation, and Learning Outcomes

Lia Haddadian will present at Teachers College, Columbia University, during the 2024 Artificial Intelligence Research in Applied Linguistics (AIRiAL) Conference. The paper reports on two studies that investigated the effects of AI-enabled personalized recommendations on the engagement, motivation, and learning outcomes of L2 learners. Study 1 involved 50 intermediate students (17 males and 33 females) learning English as a second language, while Study 2 involved another cohort of 50 participants (27 males and 23 females). A quasi-experimental design with repeated measures was conducted over six weeks. In Study 1, the experimental group received feedback from ChatGPT (GPT-4), while the control group received feedback from human tutors; in Study 2, participants received feedback from both ChatGPT and their tutors. The findings indicated no statistically significant differences in learning outcomes between the experimental and control groups, suggesting that AI-generated feedback for second language learning can be just as useful as feedback created by humans. Learners also preferred AI-generated and human-generated feedback almost equally, with each type offering distinct benefits. The findings suggest that AI-generated feedback may be integrated into L2 learning assessment without compromising learning outcomes, and that it can supplement human input to improve the learning experience.

Keywords: AI-enabled personalized recommendations, L2 learners, engagement, motivation, learning outcomes, generative AI tools.

Daneshvar, B., & Haddadian, G. (Accepted). The impact of AI-enabled personalized recommendations on L2 learners’ engagement, motivation, and learning outcomes. In 2nd Annual Artificial Intelligence Research in Applied Linguistics (AIRiAL-2024) Conference, Teachers College, Columbia University, New York, NY.
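As an aside for the methods-minded reader, a group-by-time comparison like the quasi-experimental design described above could be sketched with a linear mixed-effects model. The data file and variable names below are hypothetical placeholders, not the authors' actual analysis.

```python
# Hedged sketch only: a random-intercept model for a group x time (repeated-measures) design.
# Column names and the CSV are assumptions; the paper defines the real measures and models.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per learner per weekly measurement.
df = pd.read_csv("l2_feedback_study.csv")  # columns: learner_id, group, week, score

# Does the feedback source (AI vs. human) interact with time in predicting outcomes?
model = smf.mixedlm("score ~ C(group) * week", data=df, groups=df["learner_id"])
result = model.fit()
print(result.summary())  # a non-significant group effect would mirror the reported null finding
```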

 

Appointment

Lia Haddadian has been appointed as the Africa and Middle East Regional Representative for the International Learning Sciences Student Association (ILSSA) at ISLS/ICLS. Congratulations to Lia Haddadian on this appointment!  

A manuscript accepted for publication in Assessing Writing

July 30, 2024

The manuscript Dr. Kim co-authored with two graduate research associates, Jinho Kim and Ali Heidari, was accepted for publication in Assessing Writing.

Kim, M. K., Kim, J., & Heidari, A. (2024). Exploring the multi-dimensional human mind: Model-based and text-based approaches. Assessing Writing, 61, 100878. https://doi.org/10.1016/j.asw.2024.100878 

Abstract:

In this study, we conceptualize two approaches, model-based and text-based, grounded on mental models and discourse comprehension theories, to computerized summary analysis. We juxtapose the model-based approach with the text-based approach to explore shared knowledge dimensions and associated measures from both approaches and use them to examine changes in students' summaries over time. We used 108 cases in which we computed model-based and text-based measures for two versions of students' summaries (i.e., initial and final revisions), resulting in a total of 216 observations. We used correlations, Principal Components Analysis (PCA), and Linear Mixed-Effects models. This exploratory investigation suggested a shortlist of text-based measures, and the findings of the PCA demonstrated that both model-based and text-based measures explained the three-dimensional model (i.e., surface, structure, and semantic). Overall, model-based measures were better for tracking changes in the surface dimension, while text-based measures were descriptive of the structure dimension. Both approaches worked well for the semantic dimension. The tested text-based measures can serve as a cross-reference to evaluate students' summaries along with the model-based measures. The current study shows the potential of using multidimensional measures to provide formative feedback on students' knowledge structure and writing styles along the three dimensions.      
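To make the analysis pipeline concrete, the sketch below shows, under assumed column names and data, how correlations, a three-component PCA, and a revision-level mixed-effects model might be combined. It illustrates the general technique, not the authors' code.

```python
# Hedged sketch only: correlations, PCA, and a linear mixed-effects model over
# 108 cases x 2 revisions = 216 observations. All names below are placeholders.
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical data: one row per summary revision, with model-based (mb_1, ...) and
# text-based (tb_1, ...) measures plus identifiers.
df = pd.read_csv("summary_measures.csv")  # columns: case_id, revision, mb_1..., tb_1...
measure_cols = [c for c in df.columns if c.startswith(("mb_", "tb_"))]

# Correlations among the two families of measures.
print(df[measure_cols].corr().round(2))

# PCA to probe whether the measures organize into surface, structure, and semantic dimensions.
pca = PCA(n_components=3).fit(StandardScaler().fit_transform(df[measure_cols]))
print(pca.explained_variance_ratio_)

# Mixed-effects model: does a given measure change from the initial to the final revision?
lme = smf.mixedlm("mb_1 ~ C(revision)", data=df, groups=df["case_id"]).fit()
print(lme.summary())
```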


Dr. Min Kyu Kim chaired the Learning@Scale 2024 Conference

July 25, 2024

Our director, Dr. Min Kyu Kim, served as a program co-chair for the Learning@Scale 2024 Conference, held at the Georgia Institute of Technology from July 17 to 19.

Learning@Scale 2024 was themed “Scaling Learning in the Age of AI.” The conference focused on the potential of generative AI to advance pedagogical practices and the efficacy of learning at scale. Learning@Scale 2024 was co-located with the Educational Data Mining 2024 conference. If you are interested in the details, please visit the official website: https://learningatscale.hosting.acm.org/las2024/ 

[Photo: Learning@Scale 2024 program and general chairs]

The wildly successful conference was organized by:

  • Program Co-Chair: Min Kyu Kim, Georgia State University
  • Program Co-Chair: Xu Wang, University of Michigan
  • Program Co-Chair: Meng Xia, Texas A&M University
  • General Chair: David Joyner, Georgia Tech

The proceedings are already available for download at https://dl.acm.org/doi/proceedings/10.1145/3657604?tocHeading=heading6.

[Image: Learning@Scale 2024 proceedings cover]

Next year’s Learning@Scale will be held in Palermo, Sicily, Italy. See you all in Palermo!

Welcoming our visiting scholar: Sumin Hong!

June 24, 2024

We are excited to welcome Sumin Hong to our lab as a visiting scholar!

Sumin Hong has joined our lab as a visiting scholar for the summer of 2024. She is currently a Ph.D. candidate at Seoul National University, South Korea. Her research interests center on technology-integrated instructional design, including artificial intelligence, virtual reality, virtual worlds, and collaborative learning tools for meaningful learning. During her visit, she is exploring AI-integrated education and immersive learning for adult learners.

Presentations at AI-ALOE's 2024 Annual Review Meeting

June 21, 2024

Our director, Dr. Min Kyu Kim, and our graduate associate Jinho Kim attended and presented at AI-ALOE's 2024 Annual Review Meeting on June 21, 2024. The AI-ALOE team, comprising scholars, researchers, scientists, and student researchers, presented updates on its progress to the NSF evaluation team.

At the meeting, Jinho presented on Fostering Understanding and Knowledge Acquisition, and Dr. Kim chaired and presented in the Panel on Personalization.


Fostering Understanding and Knowledge Acquisition

As part of the Fostering Understanding segment of the Core Research: Performance Measurement and Evaluation session, Jinho introduced our Year 3 research focus: assessing the real impact of SMART on adult learning and online education through a combined summative and midterm evaluation approach, and employing a longitudinal approach to examine the impact of SMART on learners' ability to transfer learning to subsequent course tasks. Along with the issue hypothesis tree and design strategies of SMART, we shared data analysis and results from three years of SMART deployment, as well as our future steps.


Panel on Personalization

In the afternoon, Dr. Kim chaired the Panel on Personalization, introducing our efforts to conceptualize personalization and build a design framework for personalized learning with AI. He first shared ALOE's strategy for developing a multi-dimensional design guideline for personalized learning. Dr. Kim also showcased what we have done to provide personalized concept learning through SMART, discussed where SMART lies on the theory-laden design dimensions for personalization, and introduced SMART's personalization strategies and feedback, along with experiments and results.


For more information about the Annual Review Meeting, please visit the following links: 

AI2RL at the 2024 ISLS Annual Meeting

June 14, 2024

Our AI2RL members attended the 2024 International Society of the Learning Sciences (ISLS) Annual Meeting in Buffalo, New York, which took place from June 10th to 14th, and presented three short papers and two posters.


A study on AI-augmented concept learning: Impact on learner perceptions and outcomes in STEM education
Tuesday, June 11th, 2:30 to 3:30 PM, Jacobs 1225 B - AI and Tech-Enhanced Learning Environments 

Abstract: This study explores the efficacy of AI-enhanced concept learning among adult learners, aiming to bolster their comprehension and facilitate the transition to embracing technology, refining metacognitive reading strategies, and improving subsequent knowledge test scores. Leveraging an AI-driven formative assessment feedback system, named SMART, AI integration was implemented in pre-class activities within a Biology course. Learners demonstrated enhanced mental models of STEM readings, and while the levels of technology acceptance were not statistically significant, we observed numerical increases in perceived AI usefulness. However, no significant relations were found with perceived ease of use and metacognitive awareness. The impact of concept learning through SMART on knowledge test scores demonstrated partial visibility. This research underscores the holistic integration of AI tools, highlighting the importance of educators to align instructional methods such as AI with learning objectives, content, assessment tests, and learners’ AI literacy levels, particularly within the domain of online STEM education.


Investigating the influence of AI-augmented summarization on concept learning, summarization skills, argumentative essays, and course outcomes in online adult education
Tuesday, June 11th, 4:00 to 5:30 PM, Jacobs 2nd Floor Atrium - Posters

Abstract: This study aims to explore the influence of concept learning facilitated by an AI-augmented summarization feedback tool, the Student Mental Model Analyzer for Research and Teaching (SMART), on various learning outcomes within an undergraduate English course using linear mixed-effects (LME) modeling and Bayesian correlations with data from 22 participants. Significant improvements in learners’ mental models and associations of concept learning with subsequent learning activities suggest the potential of such tools in improving learning performance.
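For illustration, the sketch below pairs a linear mixed-effects model with a Bayes-factor correlation, the two analyses named in the abstract. The data file, variable names, and the pingouin-based correlation are assumptions made for the example, not the study's actual implementation.

```python
# Hedged sketch only: LME for change in mental-model scores plus a Bayes-factor
# correlation between concept learning and a later course outcome. Names are placeholders.
import pandas as pd
import pingouin as pg
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per student per SMART summary attempt.
df = pd.read_csv("smart_english_course.csv")  # columns: student_id, attempt, mm_score, essay_score

# LME: improvement in mental-model scores across attempts, with students as random effects.
lme = smf.mixedlm("mm_score ~ attempt", data=df, groups=df["student_id"]).fit()
print(lme.summary())

# Bayesian correlation (pingouin reports a BF10 alongside r) between final
# concept-learning scores and argumentative essay scores.
final = df.groupby("student_id").last()
print(pg.corr(final["mm_score"], final["essay_score"], method="pearson")[["r", "p-val", "BF10"]])
```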

A study on AI-augmented concept learning: Impact on learner perceptions and outcomes in STEM education
Thursday, June 13th, 10:45 to 11:45 AM, Jacobs 2134 - Learning Feedback and Assessment 

Abstract: This study aims to explore the utility of generative AI in providing formative assessment and feedback. Using data from 43 learners in an instructional technology class, we assessed generative AI’s evaluative indices and feedback capabilities by comparing them to human-rated scores. To do this, this study employed Linear Mixed-Effects (LME) models, correlation analyses, and a case study methodology. Our findings suggest an effective generative AI model that generates reliable evaluation for detecting learners’ progress. Moderate correlations were found between generative AI-based evaluations and human-rated scores, and generative AI demonstrated potential in providing formative feedback by identifying strengths and gaps. These findings suggest the potential of utilizing generative AI to provide different insights as well as automate formative feedback that can offer learners detailed scaffolding for summary writing.

How AI evaluates learner comprehension: A comparison of knowledge-based and large language model (LLM)-based AI approaches
Thursday, June 13th, 1:15 to 2:15 PM, Jacobs 2220 B - Large Language Models and Learning 

Abstract: This study investigated two AI techniques for evaluating learners’ summaries and explored the relationship between them: the SMART knowledge-based AI tool, which generated multidimensional measures representing knowledge models derived from learner summaries, and a Large Language Model (LLM) fine-tuned for summary scoring. The LLM model incorporated both the summary and source texts in the input sequence to calculate two component scores related to text content and wording. Summary revisions from 172 undergraduates in English and Biology classes were analyzed. The results of linear mixed-effects models revealed that both AI techniques detected changes during revisions. Several SMART measures were positively associated with an increase in the LLM’s Content scores. These findings support the notion that the LLM model excels at broad and comprehensive assessment, while SMART measures are more effective in providing fine-grained feedback on specific dimensions of knowledge structures.
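As a rough illustration of the kind of LLM scoring setup described above (a fine-tuned model that takes the summary and its source text in one input sequence and outputs two component scores), here is a minimal, generic sketch with the Hugging Face transformers library. The base model, input lengths, and training details are assumptions; the actual fine-tuned model belongs to the study.

```python
# Hedged sketch only: a generic sequence-regression setup in which the summary and its
# source text share one input sequence and the head outputs two scores (content, wording).
# The base model and settings are assumptions; a real model would first be fine-tuned
# on human-scored summaries.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2, problem_type="regression"
)

summary = "Student summary text ..."   # placeholder
source = "Source reading passage ..."  # placeholder

# Encode the summary and source as a single paired input, truncated to the model limit.
inputs = tokenizer(summary, source, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    content_score, wording_score = model(**inputs).logits.squeeze().tolist()
print(content_score, wording_score)
```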

Evaluating private artificial intelligence (AI) curriculum in computer science (CS) education: Insights for advancing student-centered CS learning
Thursday, June 13th, 4:00 to 5:30 PM, Jacobs 2nd Floor Atrium - Posters 

Abstract: This study was undertaken to pilot a Private AI curriculum designed with a problem-centered instruction (PCI) approach for post-secondary Computer Science (CS) education. To this end, a condensed version of one of the ten curricular modules was implemented in a two-hour workshop. The mixed-method data analysis revealed participants' positive motivation and interest in the curriculum, while also pinpointing opportunities to further improve the design strategies of the curriculum.

Introducing AI-ALOE and Demonstrating AI-ALOE's Technologies at the ISLS 2024 Workshop

June 9, 2024

Dr. Min Kyu Kim and our graduate associate, Jinho Kim, attended the ISLS 2024 full-day workshop, conducted jointly by five national AI institutes: AI-ALOE, EngageAI, iSAT, AI4ExceptionalEd, and INVITE. They represented AI-ALOE, introduced the institute, and demonstrated AI-ALOE's technologies, including SMART.

In the morning, Dr. Kim presented an overview of AI-ALOE, covering the institute's interests, organization, testbeds, AI technologies and their deployment results, data architecture, visualization, and next steps. After a short break, the workshop's demo session followed, where Dr. Kim and Jinho showcased several AI-ALOE technologies, including SMART, with detailed, working demonstrations.


AI Augmented Learning for All: Challenges and Opportunities – a view from the Five National AI Institutes

As Artificial Intelligence (AI) becomes increasingly powerful, it is imperative for the general public to learn more about AI and how it can be utilized to address the society’s daily challenges. The National AI Institutes represent a cornerstone of the U.S. government’s commitment to fostering long-term fundamental research in AI. This workshop will introduce the National AI Institutes program to the Learning Sciences community, and, in particular, will focus on five of such AI Institutes related to the learnings science community, i.e., the National AI Institute for Adult Learning and Online Education (AI-ALOE), the National AI Institute for Engaged Learning (EngageAI), the National AI Institute for Student-AI Teaming (iSAT), the National AI Institute for Exceptional Education (AI4ExceptionalEd), and the National AI Institute for Inclusive Intelligent Technologies for Education (INVITE). The objectives are to introduce to the learning sciences community about the various education and learning related use cases being addressed by these AI Institutes, their AI research activities, the current status of AI advancement and limitations, and more importantly, how the learning sciences community can engage with these AI Institutes to shape their research programs to more strongly align with ongoing and emerging research in the field. Key research leaders from the AI Institutes will be invited to speak at the workshop along with other key players.