
ISLS 2023 Tutorial: Demystifying Text-to-Image Generation for K-12 Educators

Registration required: https://forms.gle/j2sWtGnvqkNGXn6j6

Organizers

Safinah Ali, MIT Media Lab; Prerna Ravi, MIT CSAIL; Kate Moore, MIT STEP Lab; Hal Abelson, MIT CSAIL; Cynthia Breazeal, MIT Media Lab
Massachusetts Institute of Technology

Time and Date

Sunday, June 11th, 8:30am-5:00pm EST (Full-Day Tutorial)

Location

TBA, Montreal, QC, Canada


Call for participation

There is currently a proliferation of digital platforms for text-to-image generation. These platforms are breaking new ground in AI tools that let anyone, even beginners, easily create images with professional-quality appearance. Are you an educator or a K-12 learning researcher interested in bringing these tools to your classroom and encouraging responsible use of these technologies by young learners? This tutorial will explore text-to-image generation platforms with an emphasis on opportunities in K-12 education. Participants will review the recent technological developments that have led to the rapid advancement of text-to-image generation, explore the components of text-to-image generation (including transformers, latent space, and diffusion), and discuss the ethical and societal implications of this technology. Participants will prototype learning activities for a target K-12 age group (concepts for the professional development of teachers are also encouraged), including learning goals, age-appropriate tool introduction, and assessment. The main goal of the tutorial is to create curricula for text-to-image generation that are aligned with our approach of constructionism and computational action.
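To make the pipeline concrete ahead of the hands-on portion, here is a minimal sketch of how a text-to-image model can be invoked programmatically. It assumes the Hugging Face diffusers library and the publicly released Stable Diffusion v1.5 checkpoint; the prompt and output filename are illustrative, not part of the tutorial materials.

```python
# Minimal sketch: generating an image from a text prompt with Stable Diffusion.
# Assumes: pip install diffusers transformers torch
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained text-to-image pipeline (downloads weights on first run).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # use "cpu" if no GPU is available (much slower)

# A natural-language prompt is the only input a learner needs to supply.
prompt = "a watercolor painting of a school garden at sunrise"  # illustrative
image = pipe(prompt, num_inference_steps=25).images[0]
image.save("school_garden.png")
```

In a classroom setting, the prompt string is the natural place for learners to experiment: varying subjects, styles, and adjectives makes the link between language and the generated image tangible.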

Prerequisites

There are no prerequisites for the tutorial, but it is especially relevant to educators and researchers interested in AI or art learning for K-12 students. While the tutorial will discuss current advancements and tools in text-to-image generation, its central focus will be on using these tools to create curricula for K-12 AI education.

Intended audience:

K-12 educators; K-12 CS and AI education researchers

Theme and goals:

Text-to-image generation technologies such as Stable Diffusion, DALL-E, and Midjourney have become extremely popular in recent months, garnering interest from people well outside the AI community, including educators and K-12 students. These powerful tools generate high-quality visuals from natural language prompts and are openly accessible to anyone. In the hands of K-12 learners and educators they offer immense creative potential, but they also carry serious ethical implications. Currently, however, educators and their students do not necessarily have a good understanding of how these tools work or how they can be used or misused. In this tutorial, we will demystify text-to-image generative tools for K-12 educators and learning sciences researchers, and work alongside educators to design lessons and curricula for bringing these tools into the classroom. The goal of the tutorial is for educators and K-12 learning researchers to gain a clear understanding of how these generative tools work, and to co-design with them learning tools, lessons, and curricula for teaching K-12 students about them.

Background:

Text-to-image generation platforms are breaking new ground in AI tools that empower beginners, letting anyone easily create images with professional-quality appearance. These platforms rely on massive datasets of captioned images scraped from art blogs, museum websites, fanfiction sites, and more. The datasets are used to train chains of neural networks and large language models, which learn to generate novel images that can mimic artistic styles, human gestures, and faces with uncanny accuracy. AI Literacy, the ability to understand and work with these generative AI tools, is of multi-disciplinary interest and has the potential to open doors to a large number of future careers.
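One building block worth demystifying is the text-image matching model, CLIP, that links prompts to pictures (see resource 8 below). The sketch below, which assumes the Hugging Face transformers library and OpenAI's publicly released CLIP checkpoint, scores how well several candidate captions describe a single image; the image URL and captions are illustrative placeholders.

```python
# Minimal sketch: using CLIP to score how well captions match an image.
# Assumes: pip install transformers torch pillow requests
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Any RGB image works; the URL here is a placeholder.
url = "https://example.com/painting.jpg"
image = Image.open(requests.get(url, stream=True).raw)

captions = [
    "a watercolor landscape",
    "a photo of a cat",
    "a pencil sketch of a city",
]
inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)

# CLIP embeds the image and each caption in a shared latent space;
# a higher score means the caption better describes the image.
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)
for caption, p in zip(captions, probs[0].tolist()):
    print(f"{caption}: {p:.2f}")
```

Models like this provide the guidance signal that steers a diffusion model's denoising process toward images matching the prompt, which is why CLIP appears throughout the resources listed at the end of this page.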

From an educational perspective, text-to-image generation offers students a double-edged sword: powerful new tools for self-expression, infused with the societal biases baked into the training data. How can we help students (and their teachers) demystify these tools and develop their AI Literacy along with an understanding of the ethical and socio-technical implications of bias in AI? This is a pivotal question for educators and researchers in the learning sciences, as generative AI platforms such as DALL-E and ChatGPT have recently proven able to disrupt the traditional processes with which we, in the learning sciences, measure learning outcomes. How will students express their thinking or refine their craftsmanship if an AI can generate their work for them? This tutorial is designed to engage educators and learning scientists in possible answers to these questions by sharing insights gleaned from a) recent research on AI Literacy, specifically generative AI, in middle and high school settings, b) current research on professional development models for building teachers' AI Literacy, and c) discussion and projects created during a pilot seminar on text-to-image generation run by researchers at MIT's Responsible AI for Social Empowerment and Education (RAISE) initiative (https://image-gen.github.io/). The tutorial will culminate in a project-based activity, drawing from constructionism and computational action, that exposes participants to culturally sustaining, hands-on methods of teaching about text-to-image generative platforms and empowers learners to critically and creatively engage with text-to-image platforms as tools for communication and critical engagement with media.

Outline of planned activities:

Expected outcomes and contributions:

Educators and researchers will gain an understanding of how text-to-image generation tools work and what their societal and ethical implications are. They will brainstorm techniques for bringing these tools into the classroom and teaching students about the creative potential and responsible use of generative AI tools.

Resources

1) Discord Invite Link for the Workshop: [TBA]
2) (Recommended Read!!) Excellent article from MIT Technology Review summarizing the advances in this field: Generative AI is changing everything. But what’s left when the hype is gone?
3) How diffusion models work: https://towardsdatascience.com/diffusion-models-made-easy-8414298ce4da
4) Greg Rutkowski: This artist is dominating AI-generated art. And he’s not happy about it. https://www.technologyreview.com/2022/09/16/1059598/this-artist-is-dominating-ai-generated-art-and-hes-not-happy-about-it/
5) Diffusion models explained: https://www.youtube.com/watch?v=yTAMrHVG1ew
6) Illustrated Stable Diffusion (a bit more technical) https://jalammar.github.io/illustrated-stable-diffusion/
7) Diffusion Models: A Practical Guide: https://scale.com/guides/diffusion-models-guide
8) CLIP (connecting text and images): https://openai.com/blog/clip/
9) Paper: High Resolution Image Synthesis with Latent Diffusion Models (more technical but a great resource): https://arxiv.org/pdf/2112.10752.pdf
10) If you want to play with idioms extension that Parker demo’d, follow the Adding Extensions instructions here: https://en.scratch-wiki.info/wiki/Extension and look for DallE-dioms

More to come soon!

Contact

If you have any questions regarding the tutorial, please email one of the organizers listed above.