The Learning Assistant

A generative AI assistant for teachers.


Project Overview


Role: UX/UI Designer
Type: Responsive Site
Timeline: 80 Hours

Introduction

The problem is that chat-based generative AI apps are still in the first stages of their evolution toward full-fledged personal assistants, and so are accessible only through text-based chat interfaces. Organizing learning activities involves designing naturalistic interactions that start from the learner's current state of understanding, and AI tools can help teachers do that.

AI can help teachers quickly assemble lesson plans, activities, assessments, and so on. Teachers can also use AI to visualize the next steps in a learning journey. All of this supports the creation of pedagogically sound content that fosters the persistence necessary to acquire knowledge outside the classroom.

Consequently, the purpose of this project was to design a UI for a generative assistant that a teacher can quickly learn and use, collaboratively with other teachers, to generate engaging and effective classroom content, and to deliver that experience within the envelope of a responsive website.

Research

To investigate this problem, I started with this statement:

We want to know what users value in online and facilitated learning interfaces so that we can design an LLM-centered site that helps users carry out planned learning activities.

I employed two modes of research to investigate this problem: user research and comparative/competitive research. The goal was to understand what people were doing with LLMs, and in particular how those in the education space were presently using them.

The tools I used were a competitive analysis and a user survey. The former is below, albeit relatively sparse, given that public data on this topic is still scarce.

M3U3__In-Depth_Competitive_Analysis - Responsive LLM tutor.xlsx

I also found an academic reference that claimed LLMs were being used to automate the following categories of education tasks:

Detecting errors (semantic analysis), assessment, classification, teaching support via intelligent agents, prediction (learning analytics, student performance prediction), constructing knowledge graphs, generating feedback and question content, and general research (e.g., recommending resources for a topic). (Yan, L. et al., 2023, https://arxiv.org/pdf/2303.13379.pdf)

User research

User research is a key aspect of UI/UX work. Effective user research helps the designer ground the project in an understanding of the needs, behaviors, and motivations of the target audience.

User interviews

I conducted user interviews with a total of five individuals, all practitioners within the education sector. Many had not used generative AI applications in their workflow before and were excited about the implications of this new technology. Much of the discussion centered on effective use of generative AI: formulating an effective prompt, and applying AI in new and surprising ways across a wide spectrum of tasks, from acting as teaching support in class, to rescaling reading worksheets, to possibly even assisting with evaluating work against a rubric (at the K-8 level or below). An affinity map that encapsulates some of the insights from those interviews is below.

Affinity Maps - LA user interviews.pdf

Survey

User data was also collected through a survey. It was distributed to a group of teachers, gathering 17 unique responses in total, and focused on the implications generative AI might have for their work and the expectations they had of it.

Nearly three quarters of this sample of 17 had never used generative AI before, but about two thirds had noticed and been curious about its potential. Most teachers used the MS Office suite to create artifacts for their classes, combining it with research tools like Google Search and YouTube. When asked how tools like ChatGPT might help, they generally expected it to assist with researching content for class, save them reading time by generating summaries and takeaways, and perhaps help in generating mind maps or similar study tools.

Define

User goals

An affinity map was useful in synthesizing findings from the user survey. Drawing from the themes in the map, I identified six key features from the user data.

Through this process, I also distilled the following user goals:

Support for common education tasks.

Support for common research tasks.

Support for collaborative co-creation.


Affinity Map - Learning Assistant 1.pdf

Sitemap

Defining these key features within the context of the site's content hierarchy was the next challenge. A first pass is to the left, although it would not quite survive into the prototype iterations. In general, this sitemap worked in that it defined a set of high-level features and functions that the responsive site could accommodate.


sitemap - learning assistant 1.pdf

User Flows

Based on the user data and general research I had collected previously, I created several possible user flows to base a prototype on.

These included (a) generating a lesson plan, as an example of a common task; (b) accessing a collaboration space within the site; (c) generating summaries within a collaborative context; (d) uploading or transferring data to a collaborative session; and (e) viewing one's interactions (chat, activity, etc.) on the platform.

userflows - learning assistant 1.pdf

Feature matrix

Deciding which features, gathered through the research and analysis phase, deserved to make the cut into the design segment of this project wasn't entirely clear-cut. I created a feature priority matrix to rank possible features, which had the added benefit of indicating which features might be next in line for implementation in a second, third, or later iteration of the prototype.

Feature matrix (LA).pdf

Design

I created a style tile first. Since the site was entirely new, it was important that it have a clean, fresh, and dynamic look, and this began with defining the site's visual themes and typography.

Wireframes

This project went through at least three sets of wireframes at low, mid, and higher levels of fidelity. At low fidelity, I was trying to figure out what a generative AI system that could support the tasks identified earlier ought to do differently: what sorts of interactions it needed to support, and how it ought to offer guidance or prompting to the user, among other questions.


In developing mid-fidelity frames (WIP screenshot to the right), distinct feature sets began to emerge. First, the concept of an activity space: a content area characterized by support for recombining content and for multimodal content generation, while retaining structural coherence by dynamically reflowing content.

Content would be generated in the activity space, which would also house the content controls.

Another key feature was the separation of the app into a collaborative mode with a rich history view and a single-user content generation mode.


Color, theming, and branding elements were made more prominent in the final pass.

The responsive mobile layout is viewable through the dropdown.


Prototype

The final interactive prototype, prior to usability testing, was based on the user flows discussed earlier.

Test + Iterate

Usability research

Twelve participants responded to an unmoderated, survey-based test of the prototype. The metrics were learnability, memorability, recoverability, and satisfaction. Overall, about three quarters of users found the app demo memorable and satisfying.


In more detail, the survey asked participants to respond to questions about using a learning assistant app based on the shared prototype. The user flows centered on generating lesson plans and initiating a collaborative session, with several questions directed at key metrics like learnability, recoverability, and satisfaction. Users reported learnability as average (7/12) to highly intuitive (4/12). Users generally did not encounter errors, and most found the interface aesthetically pleasing and functional (8/12).

Some users had issues with font sizing, especially on smaller (mobile) devices. In subsequent iterations of this app, I would strengthen accessibility controls, in particular by adding font resizing and contrast controls.

lausabilityreport.pdf

Priority Iterations

Drawing from insights in the usability survey, I updated the feature priority matrix with possible additional features. The resulting feature grid, ranked on the contrasting dimensions of feasibility and priority (based in part on user preferences expressed in the earlier interviews), is below.
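As a rough illustration of the ranking logic behind such a matrix (the feature names and scores below are hypothetical placeholders, not the actual matrix data), features can be sorted on priority first, with feasibility as the tiebreaker:

```python
# Hypothetical sketch of a feature priority matrix. Each feature gets a
# user-priority score and a feasibility score (1-5, higher is better).
features = [
    {"name": "Lesson plan generator", "priority": 5, "feasibility": 4},
    {"name": "Collaborative sessions", "priority": 4, "feasibility": 3},
    {"name": "Font resizing controls", "priority": 3, "feasibility": 5},
    {"name": "Rubric-based grading aid", "priority": 4, "feasibility": 2},
]

# Rank by priority, then feasibility, highest first.
ranked = sorted(
    features,
    key=lambda f: (f["priority"], f["feasibility"]),
    reverse=True,
)

for f in ranked:
    print(f'{f["name"]}: priority {f["priority"]}, feasibility {f["feasibility"]}')
```

Features that score high on both dimensions land at the top of the implementation queue; high-priority but low-feasibility items become candidates for later iterations.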

Prioritization matrix (LA) 1.pdf

Conclusion

Overall, this was a really interesting exercise. Its main appeal, for me, lay in applying the process I had acquired earlier in this course to designing for a new technology, and in seeing the result, both in terms of UI design and, more broadly, in linking it to conceptions of product design and engineering for a specific domain.

There were significant challenges in understanding which interactions to design or optimize for, since my user base did not have deep prior knowledge of this technology's affordances; as a consequence, these surveys were educative for them too.


For priority revisions, I would add tooltips to create a quasi-walkthrough for new users. I would also make the affordances of the activity space more obvious through animations and similar cues. Finally, I would add presets for common tasks that users could structure their queries around; for example, preset worksheets for different levels that users could customize with the generative AI.