"Our Mission is to Build on Theories of Learning and Instruction to Create Innovative Learning Environments that Maximize Learner Capacity to Achieve Learning Goals"

Presentations at the AI ALOE Year 3 Executive Advisory Board Meeting

May 10, 2024

Dr. Min Kyu Kim, two of our AI2 graduate associates, and our visiting scholar attended the AI-ALOE Year 3 Executive Advisory Board (EAB) Meeting on May 8th, where about 30 AI-ALOE colleagues shared their research. During the meeting, Jinho presented on Fostering Deeper Understanding through Text Summarization, while Dr. Kim participated in a Panel on Personalization with a presentation titled Design Dimensions for AI-Augmented Personalized Learning.


 

Fostering Deeper Understanding through Text Summarization

During the Text Summarization segment of the Use-Inspired AI session, Jinho presented findings from a summative evaluation of SMART implementations. She shared analyses using linear mixed-effects models, focusing on the relation between revisions in SMART and concept learning. She also presented the relation between SMART's concept-learning indices and subsequent cognitive tasks (i.e., follow-up writing activities in English classes and problem-solving tasks in associated exams). Finally, Jinho shared preliminary results on engagement with SMART from the A/B experiments conducted in Fall 2023 and Spring 2024.
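The revision-to-concept-learning analysis described above can be sketched with a linear mixed-effects model. The snippet below is a minimal illustration on simulated data; the variable names, effect sizes, and random-intercept structure are assumptions for illustration, not the actual SMART analysis:

```python
# Hypothetical sketch: concept-learning score as a function of revision
# round, with a random intercept per learner (not the actual SMART data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_learners, n_revisions = 30, 4
data = pd.DataFrame({
    "learner": np.repeat(np.arange(n_learners), n_revisions),
    "revision": np.tile(np.arange(n_revisions), n_learners),
})
# Simulate a positive revision effect plus learner-level variation.
learner_offset = rng.normal(0, 1.0, n_learners)[data["learner"]]
data["concept_score"] = (
    50 + 2.5 * data["revision"] + learner_offset + rng.normal(0, 1.0, len(data))
)

# Random-intercept model: concept_score ~ revision + (1 | learner)
model = smf.mixedlm("concept_score ~ revision", data, groups=data["learner"])
result = model.fit()
print(result.params["revision"])  # estimated fixed effect of revision
```

Here the fixed-effect estimate for `revision` recovers the simulated per-revision gain, while the grouping term absorbs stable between-learner differences, which is the usual motivation for a mixed model over ordinary regression on repeated measures.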
 


 

Design Dimensions for AI-Augmented Personalized Learning

During the afternoon Panel on Personalization, Dr. Kim proposed ten design dimensions to consider in AI-augmented personalized learning. He first outlined key theories underpinning personalized learning (such as CoI, ICAP, CoP, and Self-Determination) and then shared the ten design dimensions, which fall into four clusters: task characteristic-related, learning domain-related, motivation-related, and AI role-related. As an illustration, he demonstrated how SMART fits into each of these dimensions. Additionally, Dr. Kim highlighted two further considerations: cognitive load and learners' feedback literacy. Following his presentation, Dr. Kim also took part in the panel discussion on personalization.



Congratulations! Dr. Ali Heidari


May 6, 2024

Congratulations to our graduate associate, Ali Heidari, who received his Ph.D. degree on May 8, 2024! Ali has worked tirelessly under the guidance of Dr. Kim over the past five years. In a symbolic moment, his Ph.D. candidate title was replaced with the title of Dr. Heidari as Dr. Kim hooded him during the ceremony. Let's all join in celebrating this significant achievement and Ali's well-deserved success! His dissertation abstract appears below.


EXAMINING LEARNER’S EVALUATIVE JUDGMENT SUPPORTED BY TECHNOLOGY-ENABLED FEEDBACK INFORMATION

Abstract: 

Evaluative judgment is the capacity to discern and assess the quality of work using established criteria (Sadler, 1989), a critical skill for fostering self-regulation and continuous improvement in learning environments (Boud & Falchikov, 2006). This study investigates the effects of self-assessment versus peer assessment and technology versus non-technology settings on evaluation scores, evaluative judgment quality, and rating confidence of undergraduate college students. Utilizing a linear mixed-effects model, the research explores these impacts while accounting for individual participant differences (Gao et al., 2019; Panadero et al., 2016; Shore et al., 1992). The study indicated that peer assessments consistently yielded higher evaluation scores across technological and non-technological contexts. However, no significant differences were observed in the quality of evaluative judgment between assessment types or settings, suggesting a more complex interplay of cognitive and affective processes than previously assumed (Sadler, 1998). Unexpectedly, peer assessment was associated with greater rating confidence, challenging the notion that self-assessment, particularly when augmented by technology, would enhance confidence levels (McCarthy, 2017; Panadero et al., 2016). These results underline the importance of peer interaction and the provision of clear evaluative criteria in enhancing evaluative practices. The study recommends integrating structured peer-assessment activities into educational curricula to promote critical feedback and reflective learning (Falchikov & Goldfinch, 2000; Hanrahan & Isaacs, 2001). The findings contribute to our understanding of assessment practices, emphasizing further research to explore the long-term development of evaluative judgment and the optimal integration of technology in assessment (Ecclestone, 2001; O’Donovan et al., 2004).

Lia Haddadian, our graduate associate, has achieved two significant milestones.


April 25, 2024

We are thrilled to announce that our graduate associate, Lia Haddadian, has achieved two significant milestones. During her three years at the lab, Lia has made outstanding contributions. Her paper was published in the reputable open-access journal The Journal of Applied Instructional Design, and she was also awarded the prestigious "AACE Award" at the Society for Information Technology & Teacher Education (SITE) conference on March 25, 2024, in Las Vegas, Nevada. This award was selectively given to only five of the 411 papers presented, making it a remarkable achievement.

Congratulations to Lia on her well-deserved success! Take a look at the details below.

1.    Golnoush Haddadian had a manuscript published in The Journal of Applied Instructional Design. Building on her interest in English language education, she leveraged Grammarly feedback to enhance English language learners’ speaking skills. Her research suggests that Grammarly feedback greatly enhanced learners’ speaking abilities. Moreover, learners held positive perceptions of Grammarly and showed signs of motivated use of its feedback in their everyday lives; they also reported broadened perceptions and significant experiential value from using Grammarly.

Haddadian, G. & Haddadian, N. (2024). Innovative Use of Grammarly Feedback for Improving EFL Learners’ Speaking: Learners’ Perceptions and Transformative Engagement Experiences in Focus. The Journal of Applied Instructional Design, 13(2). 

2.    Golnoush Haddadian presented her paper at the Society for Information Technology & Teacher Education (SITE) conference on March 25, 2024, in Las Vegas, Nevada. Her paper, titled “An Investigation of ELT Teachers’ Online Self-efficacy: Does Teachers’ Level of Agency Matter?”, received the “AACE Award,” which was selectively given to five of the 411 papers presented this year.

Her research explored the online self-efficacy of sixty English language teachers in Iran based on their agency level. The findings suggested that teachers with high levels of agency significantly outperformed teachers with low levels of agency. The results identified five themes related to high and four themes related to low agency levels: “confidence, sense of control, willingness to take risks, positive attitude, and adaptability” for the former, and “lack of confidence, resistance to change, feeling overwhelmed by technology, and the need for support” for the latter. These themes help explain the factors influencing teachers’ high and low online self-efficacy levels.

Haddadian, G. & Haddadian, N. (2024). An Investigation of ELT Teachers’ Online Self-efficacy: Does Teachers’ Level of Agency Matter?. In J. Cohen & G. Solano (Eds.), Proceedings of Society for Information Technology & Teacher Education International Conference (pp. 1607-1615). Las Vegas, Nevada, United States: Association for the Advancement of Computing in Education (AACE). Retrieved April 25, 2024 from https://www.learntechlib.org/primary/p/224179/.

Dr. Kim was invited as a guest speaker at the School of Nursing faculty meeting.


April 15, 2024

Dr. Kim was honored to be invited as a guest speaker at the School of Nursing faculty meeting at Georgia State University (GSU) on April 15, 2024. His presentation, titled "AI-Supported Nursing Education: Potentials and Showcases," covered:

  • An introduction to the AI² Research Lab and its current AI-related projects

  • A SMART Technology Demonstration based on a user scenario

  • Showcases of SMART-applied classrooms

  • Research findings on the impacts of SMART technology

  • The potential of AI to generate NCLEX-style practice questions

Dr. Kim's talk highlighted the exciting possibilities of AI in nursing education and sparked important discussions among the faculty.

Five papers accepted to ISLS 2024 Annual Meeting


April 12, 2024

We're excited to announce that five of our papers have been accepted for presentation at the 2024 International Society of the Learning Sciences (ISLS) Annual Meeting in Buffalo, New York, taking place June 10th to 14th. These papers, stemming from our NSF AI ALOE and NSF SaTC projects, span our work on AI-augmented concept learning, a private AI curriculum in computer science education, AI-augmented summarization, and the evaluation of learner comprehension through AI techniques. We look forward to sharing our findings!

Bae, Y., Kim, J., Davis, A., & Kim, M. (accepted). A study on AI-augmented concept learning: Impact on learner perceptions and outcomes in STEM education. Proceedings of the 18th International Conference of the Learning Sciences/Computer-Supported Collaborative Learning (ICLS/CSCL-2024). Buffalo, NY: International Society of the Learning Sciences.  

Abstract: This study explores the efficacy of AI-enhanced concept learning among adult learners, aiming to bolster their comprehension and facilitate the transition to embracing technology, refining metacognitive reading strategies, and improving subsequent knowledge test scores. Leveraging an AI-driven formative assessment feedback system, named SMART, AI integration was implemented in pre-class activities within a Biology course. Learners demonstrated enhanced mental models of STEM readings, and while the levels of technology acceptance were not statistically significant, we observed numerical increases in perceived AI usefulness. However, no significant relations were found with perceived ease of use and metacognitive awareness. The impact of concept learning through SMART on knowledge test scores was partially visible. This research underscores the holistic integration of AI tools, highlighting the importance of educators aligning instructional methods such as AI with learning objectives, content, assessment tests, and learners’ AI literacy levels, particularly within the domain of online STEM education.

Haddadian, G., Panzade, P., Takabi, D., & Kim, M. (accepted). Evaluating private artificial intelligence (AI) curriculum in computer science (CS) education: Insights for advancing student-centered CS learning. Proceedings of the 18th International Conference of the Learning Sciences/Computer-Supported Collaborative Learning (ICLS/CSCL-2024). Buffalo, NY: International Society of the Learning Sciences.  

Abstract: This study was undertaken to pilot a Private AI curriculum designed with a problem-centered instruction (PCI) approach for post-secondary Computer Science (CS) education. To this end, a condensed version of one of the ten curricular modules was implemented in a two-hour workshop. The mixed-method data analysis revealed participants' positive motivation and interest in the curriculum, while also pinpointing opportunities to further improve the design strategies of the curriculum.

Kim, J., Bae, Y., Stravelakis, J., & Kim, M. (accepted). Investigating the influence of AI-augmented summarization on concept learning, summarization skills, argumentative essays, and course outcomes in online adult education. Proceedings of the 18th International Conference of the Learning Sciences/Computer-Supported Collaborative Learning (ICLS/CSCL-2024). Buffalo, NY: International Society of the Learning Sciences.  

Abstract: This study aims to explore the influence of concept learning facilitated by an AI-augmented summarization feedback tool, the Student Mental Model Analyzer for Research and Teaching (SMART), on various learning outcomes within an undergraduate English course using linear mixed-effects (LME) modeling and Bayesian correlations with data from 22 participants. Significant improvements in learners’ mental models and associations of concept learning with subsequent learning activities suggest the potential of such tools in improving learning performance.

Kim, J., Lee, T., Bae, Y., & Kim, M. (accepted). A comparison between AI and human evaluation with a focus on generative AI. Proceedings of the 18th International Conference of the Learning Sciences/Computer-Supported Collaborative Learning (ICLS/CSCL-2024). Buffalo, NY: International Society of the Learning Sciences.  

Abstract: This study aims to explore the utility of generative AI in providing formative assessment and feedback. Using data from 43 learners in an instructional technology class, we assessed generative AI’s evaluative indices and feedback capabilities by comparing them to human-rated scores. To do this, this study employed Linear Mixed-Effects (LME) models, correlation analyses, and a case study methodology. Our findings suggest an effective generative AI model that generates reliable evaluation for detecting learners’ progress. Moderate correlations were found between generative AI-based evaluations and human-rated scores, and generative AI demonstrated potential in providing formative feedback by identifying strengths and gaps. These findings suggest the potential of utilizing generative AI to provide different insights as well as automate formative feedback that can offer learners detailed scaffolding for summary writing.

Kim, M., Kim, J., Bae Y., Morris, W., Holmes, L., & Crossley, S. (accepted). How AI evaluates learner comprehension: A comparison of knowledge-based and large language model (LLM)-based AI approaches. Proceedings of the 18th International Conference of the Learning Sciences/Computer-Supported Collaborative Learning (ICLS/CSCL-2024). Buffalo, NY: International Society of the Learning Sciences. 

Abstract: This study investigated two AI techniques for evaluating learners’ summaries and explored the relationship between them: the SMART knowledge-based AI tool, which generated multidimensional measures representing knowledge models derived from learner summaries, and a Large Language Model (LLM) fine-tuned for summary scoring. The LLM model incorporated both the summary and source texts in the input sequence to calculate two component scores related to text content and wording. Summary revisions from 172 undergraduates in English and Biology classes were analyzed. The results of linear mixed-effects models revealed that both AI techniques detected changes during revisions. Several SMART measures were positively associated with an increase in the LLM’s Content scores. These findings support the notion that the LLM model excels at broad and comprehensive assessment, while SMART measures are more effective in providing fine-grained feedback on specific dimensions of knowledge structures.

Dr. Min Kyu Kim presented about the theoretical underpinnings for the SMART project


April 1, 2024

Dr. Min Kyu Kim, our director, presented the theory-driven development and research behind SMART at today's AI-ALOE Foundational and Use-Inspired AI Meeting.

Dr. Kim began by outlining the project's aim of aiding learners in understanding key concepts. He elaborated on how theories such as Personalization, Community of Inquiry (CoI), and ICAP (Interactive, Constructive, Active, Passive) have influenced SMART's design. Additionally, Dr. Kim shared our research questions and data collection methods within the SMART project. He also raised the question of sharing instruments among the ALOE research teams and proposed developing shareable theoretical frameworks to facilitate collaboration and research involving multiple AI tools. Dr. Kim led discussions on external threats and opportunities relevant to our work within AI-ALOE as well.

Welcoming our visiting scholar: Sua Im!


March 31, 2024

We are pleased to welcome Sua Im to our lab as a visiting scholar!

Sua Im has joined our lab as a visiting scholar through the Outstanding Graduate Students’ Overseas Training Program (BK21 FOUR Project) sponsored by The National Research Foundation of Korea. Sua holds bachelor's degrees in French language and literature and in sports industry, as well as a master's degree in sports industry, all from Yonsei University, South Korea. She is currently advancing her Ph.D. studies at Yonsei. Her research focuses on various aspects of sports, serious leisure, and enhancing quality of life. Currently, she is exploring the intersection of AI-based programs and well-being among older adults within senior centers.
 

AI2 members presented their Year 3 research at the AI ALOE retreat on March 7-8.


March 18, 2024

Dr. Min Kyu Kim, accompanied by two AI2 graduate associates, Jinho Kim and Yoojin Bae, participated in and delivered three presentations during the AI ALOE retreat at the CODA building at Georgia Tech on March 7th and 8th. This retreat marked the third year of ALOE activities, where researchers from multiple institutions shared their Year 3 studies, with a focus on theory, experimentation, and data analysis.

1. Theoretical Framework for the SMART Project

During the morning session of the first day, on March 7th, Dr. Kim proposed a hierarchical model of theories from a design research perspective to explain the theoretical framework for the SMART project. Specifically, he leveraged theories of mental models and self-determination theory as foundational principles supporting cognitive and motivational analysis in SMART design experiments. Additionally, he demonstrated how design guidelines derived from the general principles of the Community of Inquiry (COI) model and the Interactive-Constructive-Active-Passive (ICAP) framework were applied to SMART development.

2. SMART Experiments 

In the afternoon session on March 7th, Yoojin presented the SMART experiments conducted during Fall 2023 and Spring 2024 in English and Biology classes at Technical College System of Georgia (TCSG) colleges. To assess different user experiences and levels of adaptation to SMART, we conducted A/B experiments in quasi-experimental and randomized controlled trial formats, respectively. Yoojin demonstrated how AI2 researchers collaborated with instructors to design and implement SMART-integrated teaching and learning. Additionally, she shared her reflections on successes and failures to inform future AI-related experiment designs, including automated data collection at scale.

3. Summative Evaluation of Three Years of SMART Deployment

On March 8th, Jinho presented our data analysis findings from the three-year deployment of SMART technology in two TCSG courses, English and Biology. As the AI ALOE team is currently in the midst of the project's third year, evaluating the effectiveness of SMART deployment for learner performance is crucial. To achieve this, we defined learning at both the micro and meso levels. Micro-level learning pertains to specific learning assignments within a unit, with summarization tasks on SMART serving as examples in this project. Meso-level learning, on the other hand, occurs across assignments throughout a course over a semester. Our assumptions were twofold: (a) learners' efforts in revising with SMART improve their conceptual understanding of the course materials (micro-level learning), and (b) this improved conceptual understanding leads to higher performance in subsequent learning activities. To test these assumptions, linear mixed-effects models were employed, revealing a positive and significant impact across the courses.
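As a rough illustration of this two-level evaluation, the sketch below simulates micro-level gains within assignments and then checks a meso-level relation between a learner's overall concept score and a subsequent task score. The model structure, effect sizes, and all variable names here are hypothetical, not the actual study specification:

```python
# Hypothetical two-level sketch on simulated data (not the TCSG data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n_learners, n_assignments, n_revisions = 40, 3, 3
rows = []
for learner in range(n_learners):
    ability = rng.normal(0, 1.0)  # stable learner-level difference
    for assignment in range(n_assignments):
        for revision in range(n_revisions):
            rows.append({
                "learner": learner,
                "revision": revision,
                # Micro level: score improves with each revision.
                "concept_score": 60 + 3 * revision + 2 * ability
                                 + rng.normal(0, 1.0),
            })
micro = pd.DataFrame(rows)

# Micro level: concept_score ~ revision + (1 | learner)
micro_fit = smf.mixedlm("concept_score ~ revision", micro,
                        groups=micro["learner"]).fit()

# Meso level: mean concept score per learner vs. a subsequent task score.
meso = micro.groupby("learner", as_index=False)["concept_score"].mean()
meso["task_score"] = (70 + 0.8 * (meso["concept_score"] - 60)
                      + rng.normal(0, 1.0, n_learners))
meso_fit = smf.ols("task_score ~ concept_score", meso).fit()

print(micro_fit.params["revision"], meso_fit.params["concept_score"])
```

In this toy setup the micro-level model recovers the simulated per-revision gain, and the meso-level regression recovers the simulated link between conceptual understanding and subsequent performance, mirroring the two assumptions (a) and (b) in structure only.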

AI2RL Spring Feast


March 12, 2024

We held our highly anticipated Spring Feast during spring break, bringing members of our lab together for food and fun. On March 12th, the members of our lab, including our two visiting scholars and three graduate associates, gathered at Dr. Kim's place at lunchtime for a midday of great food and pleasant company. We enjoyed the delicious meal Dr. Kim prepared for us, along with tasty desserts and a choice of tea or coffee. The Spring Feast gave us a much-needed opportunity to relax and unwind, making for a memorable and enjoyable break.

Invited Talk: Dr. Min Kyu Kim presented our learning measures and approach to machine teaching


February 26, 2024

Dr. Min Kyu Kim, our director, was invited to present at the NSF-ALOE Learning and Management Discussions on February 26th. His presentation was about the SMART project's learning measures and approach to machine teaching.

Dr. Kim discussed the various learning metrics utilized within the SMART project, beginning with the theory of change guiding our work, highlighting our research questions, and explaining the tools we use for data collection. Additionally, he addressed the successful strategies we've implemented, the challenges we face, and our efforts to improve data collection scalability. Dr. Kim also touched upon our team's approach to machine teaching, focusing on the use of generative AI for expert modeling. He briefly mentioned our upcoming pilot test plans and W.R.I.T.E. development as well.