Theme and goals
Text-to-image generation technologies such as Stable Diffusion, DALL-E, and Midjourney have become extremely popular in recent months, garnering interest even from people outside the AI community, including educators and K-12 students. These powerful tools can generate high-quality visuals from natural language prompts and are openly accessible to anyone. In the hands of K-12 learners and educators they hold enormous creative potential, but they also carry serious ethical implications. Currently, however, educators and their students do not necessarily have a good understanding of how these tools work or how they can be used or misused. In this tutorial, we will demystify text-to-image generative tools for K-12 educators as well as learning science researchers, and work alongside educators to design lessons and curricula for bringing these tools into the classroom. The goal of the workshop is for educators and K-12 learning researchers to gain a clear understanding of how these generative tools work, and to co-design with them learning tools, lessons, or curricula for teaching K-12 students about them.
Background
Text-to-image generation platforms are breaking new ground in AI tools that empower beginners, letting anyone easily create professional-quality images. These platforms rely on massive datasets of labeled images scraped from art blogs, museum websites, fanfiction sites, and more. The datasets are used to train chains of neural networks and large language models, which learn to generate novel images that can mimic human gestures, tones, faces, and voices with uncanny accuracy. AI Literacy - the ability to understand and work with these generative AI tools - is of multi-disciplinary interest and has the potential to open doors to a large number of future careers.
From an educational perspective, text-to-image generation offers students a double-edged sword: powerful new tools for self-expression, infused with the societal biases baked into the training data. How can we help students (and their teachers) demystify these tools and develop their AI Literacy, along with an understanding of the ethical and socio-technical implications of bias in AI? This is a pivotal question for educators and researchers in the learning sciences, as generative AI platforms such as DALL-E and ChatGPT have recently proven able to disrupt the traditional processes by which we, in the learning sciences, measure learning outcomes. How will students express their thinking or refine their craftsmanship if an AI can generate their work for them? This tutorial is designed to engage educators and learning scientists in possible answers to these questions by sharing insights gleaned from (a) recent research on AI Literacy, specifically generative AI, in middle and high school settings, (b) current research on professional development models for building teachers' AI Literacy, and (c) discussions and projects created during a pilot seminar on text-to-image generation run by researchers at MIT's Responsible AI for Social Empowerment and Education (RAISE) initiative: https://image-gen.github.io/. The tutorial will culminate in a project-based activity drawing on constructionism and computational action, exposing participants to culturally-sustaining, hands-on methods of teaching about text-to-image generative platforms and empowering learners to engage critically and creatively with text-to-image platforms as tools for communication and critical engagement with media.
Outline of planned activities
Introduction: Exploring generative AI models and their relevance in K-12 AI Literacy
- Session 1 Slides
- What are language inference and visual generative AI models, and how do they work?
- Examples of generative media.
- Motivation for teaching this to K-12 students: New media and possibilities for creation and computational creativity, as well as their ethical considerations.
Experimenting with the latest generative AI platforms
- Session 2 Slides
- Survey and experiment with existing platforms (DALL-E 2, Stable Diffusion, Midjourney, and NightCafe) and complete two activities using them: Creative AI Storytelling and Self-Portraits. Descriptions and examples of these two activities, taken from our pilot seminar, are linked here.
- Reflect on your experience of creating images for the activities above: what role did the AI tool play, how can this technology be used or misused, and how is it relevant to K-12 students?
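A recurring discussion point in these activities is how the structure of a prompt shapes the generated image. As a purely illustrative sketch (the function name and prompt template below are our own, not any platform's API), a prompt can be assembled from labeled parts so that students can reason about which phrase changed which aspect of the result:

```python
def build_prompt(subject, style=None, medium=None, details=None):
    """Assemble a text-to-image prompt from labeled parts.

    Separating subject, medium, style, and extra details helps students
    see which phrase influenced which aspect of the generated image.
    """
    parts = [subject]
    if medium:
        parts.append(medium)
    if style:
        parts.append(f"in the style of {style}")
    if details:
        parts.extend(details)
    return ", ".join(parts)

prompt = build_prompt(
    subject="a lighthouse on a stormy coast",
    medium="oil painting",
    style="Impressionism",
    details=["dramatic lighting", "wide angle"],
)
# → "a lighthouse on a stormy coast, oil painting, in the style of Impressionism, dramatic lighting, wide angle"
```

Students can paste the resulting string into any of the platforms above, vary one part at a time, and compare outputs.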
Understanding how the technology behind text-to-image generation tools works
- How do generative algorithms like Stable Diffusion work?
- How do natural language models fit into this?
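As a primer for this session, the forward "noising" process at the heart of diffusion models can be sketched in a few lines of NumPy. This is an educational toy under standard DDPM assumptions, not any platform's actual implementation: a clean image x0 is mixed with Gaussian noise according to a variance schedule, and the model is trained to reverse this corruption step by step.

```python
import numpy as np

def forward_diffusion(x0, t, alpha_bar):
    """Noise a clean image x0 to timestep t (DDPM forward process).

    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps
    """
    eps = np.random.randn(*x0.shape)           # Gaussian noise
    a = alpha_bar[t]
    xt = np.sqrt(a) * x0 + np.sqrt(1.0 - a) * eps
    return xt, eps                             # the model learns to predict eps from (xt, t)

# A linear beta schedule, as in the original DDPM formulation
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

x0 = np.random.rand(64, 64, 3)                 # stand-in for a training image
xt, eps = forward_diffusion(x0, t=999, alpha_bar=alpha_bar)
# At t near T, alpha_bar is tiny, so x_t is almost pure noise.
```

Text-to-image systems additionally condition the denoising network on an embedding of the prompt (e.g., from a text encoder such as CLIP's), which is how natural language models steer the reverse process toward the described image.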
K-12 generative AI literacy
- Session 4 Slides
- Case studies: Exploring existing Creative AI curriculum. Discussing learning goals and pedagogical methods used in previous work.
- Design and share proposals for creative AI literacy, including target age group, learning goals, tools used, learning activities, and assessment.
- Deliverables will include curricula, learning lessons, interactive tools or assessment methods.
Expected outcomes and contributions
Educators and researchers will gain an understanding of how text-to-image generation tools work and what their societal and ethical implications are. They will brainstorm techniques for bringing these tools into the classroom and teaching students about the creative potential and responsible use of generative AI tools.
Resources and links
- (Recommended Read!!) Excellent article from MIT Technology Review summarizing the advances in this field: Generative AI is changing everything. But what’s left when the hype is gone?
- How do Diffusion models work: Link.
- Greg Rutkowski: This artist is dominating AI-generated art. And he’s not happy about it. Visit here.
- Diffusion models explained: Video.
- Illustrated Stable Diffusion (a bit more technical): Link.
- Diffusion Models: A Practical Guide: Link.
- CLIP (connecting text and images): Link.
- Paper: High Resolution Image Synthesis with Latent Diffusion Models (more technical but a great resource): Link.
- If you want to play with idioms extension that Parker demo’d, follow the Adding Extensions instructions here.
More to come soon!