
Category: Assignment 1 – Midterm Review – Module 1 & 2 Blog Posts and Comments

Module 2: Analyzing DALL-E 2 with the SAMR Model

INTRODUCTION

In this blog, I’m going to explore DALL-E 2, an AI tool that creates images from text prompts, and see if it could be useful in an educational setting. I use ChatGPT every day to help debug my code, but I’ve never tried DALL-E before, so this is going to be a new experience for me. I’ll also be using ChatGPT to guide my analysis and applying the SAMR model to figure out if DALL-E 2 can actually enhance learning. Along with that, I’ll touch on some ethical concerns, like how AI might impact classrooms and what we should think about before bringing these tools into education.

Using DALL-E 2

I wanted to test DALL-E 2’s creativity, so I gave it the prompt ‘Sheep playing baseball on a rainy day with friends.’ The AI-generated image captured the whimsical scene quite well. Then I tried another prompt: ‘Dog winning Olympic gold in 100m sprint.’ The result was equally amusing, with a dog standing proudly on the Olympic podium. These kinds of images could be used in a creative writing or art class to inspire students to develop stories based on visual prompts. For students who might not be comfortable with drawing or painting, DALL-E 2 provides an accessible way to express creativity and engage with visual content in a way that feels more approachable.

Fig. 1 DALL-E 2 generated this image based on the prompt ‘Sheep playing baseball on a rainy day with friends.’

Fig. 2 DALL-E 2 generated this image based on the prompt ‘Dog winning Olympic gold in 100m sprint.’

SAMR Model Analysis (with ChatGPT’s Help)

To help me better understand how DALL-E 2 fits into the SAMR model, I asked ChatGPT for a breakdown. ChatGPT explained how the tool could be used at each stage, from replacing traditional art supplies to redefining how students can engage with creative content in the classroom. Here’s the breakdown it provided:

Fig. 3 ChatGPT helped outline DALL-E 2’s fit within the SAMR model, guiding my analysis of its use in education.

My Analysis

After going through ChatGPT’s breakdown, I’ve got my own thoughts on how DALL-E 2 fits into education:

  1. Substitution:
    At this level, DALL-E 2 just replaces traditional art tools like pencils or paint. It’s great for students who can’t draw well, but it doesn’t really change how they’re learning, just the tool they use.
  2. Augmentation:
    This is where DALL-E 2 gets more useful. Students can quickly change prompts and experiment with different styles. It’s super cool because it speeds up creativity and gives them more flexibility.
  3. Modification:
    Here, DALL-E 2 starts to transform learning. Students can explore complex ideas or historical scenes without needing to be experts in drawing. It makes learning more interactive and visual.
  4. Redefinition:
    At this level, DALL-E 2 opens up possibilities that weren’t there before. Students can collaborate globally or create visuals for abstract concepts. This totally changes how students can engage with content in the classroom.

Ethical Considerations

I asked ChatGPT for a concise summary, since at first I was getting too much unnecessary information. I just wanted the key points, so I added “concise” to my prompt, and it worked! ChatGPT came back with a breakdown of the major issues, like bias, copyright, and privacy, in a clear, straightforward way. It was pretty helpful, especially since I was looking to keep things simple.

Fig. 4 ChatGPT’s response explaining the ethical concerns of using DALL-E 2 in education.

Reflection

Using ChatGPT and DALL-E 2 for this assignment was actually pretty fun and eye-opening. Like, I didn’t expect DALL-E 2 to do such a good job with my random prompts like “sheep playing baseball on a rainy day.” It was super cool to see how it handled that. The way it generates images so quickly is something I can see working well in classrooms, especially in art or creative writing. It really speeds things up when you’re trying to come up with ideas.

One thing I noticed is that while DALL-E 2 makes things easier, there’s a risk of students relying too much on it and missing out on learning how to do things themselves. Like, it’s great for visuals, but what about developing actual drawing skills? Also, the ethical issues are pretty real. I hadn’t really thought about how the AI might pull from copyrighted images, or how bias in the dataset could sneak into the images. So yeah, while it’s an awesome tool, we have to be careful with how it’s used.

Conclusion

So, overall, I think DALL-E 2 is a super useful tool for learning, especially when it comes to making visual content. It’s really accessible for students who might not have the skills to draw or paint but still want to create something cool. It fits into the SAMR model pretty well, especially at the higher levels where it can redefine learning by allowing students to collaborate or create things they wouldn’t have been able to do by hand.

Looking ahead, it’s clear that tools like DALL-E 2 could play a big role in making education more interactive and creative.

Citations

“How does DALL-E 2 fit into the SAMR model for use in education?” prompt, ChatGPT, OpenAI, 12 Oct. 2024, chat.openai.com/.

“What ethical concerns are there when using AI tools like DALL-E 2 in education concise?” prompt, ChatGPT, OpenAI, 12 Oct. 2024, chat.openai.com/.

“Sheep playing baseball on a rainy day with friends” prompt, DALL-E, version 2, OpenAI, 12 Oct. 2024, labs.openai.com/.

“Dog winning Olympic gold in 100m sprint” prompt, DALL-E, version 2, OpenAI, 12 Oct. 2024, labs.openai.com/.

Module 1: The Life and Death of Stars: Black Holes

Hey everyone! So for this first blog post I created a video about the life cycle of stars, how black holes form, and what they look like. I used Screencastify to record my screen and CapCut to edit it, incorporating animations, images, and scientific concepts to explain how black holes form. Most of the animations came from sources such as Stargaze, the Science Channel, Discovery, and National Geographic.

  • Coherence Principle – Okay, so this one was about keeping things focused and avoiding unnecessary info. I had to remind myself a few times not to get carried away with cool facts that didn’t really help explain black holes. For example, instead of going off on a tangent about the entire life cycle of a star, I kept it simple by only explaining the parts that lead to black holes. This way, viewers don’t get overwhelmed with too much info. It also helped me cut down the length of my video, which is always a win!
  • Redundancy Principle – One big thing I learned from Mayer is that too much info at once is a bad thing. In my video, I tried not to put text on the screen while also narrating the same thing. Like, when I explained the singularity, I didn’t write it out on the screen; I just described it while showing the visual. This way, people can focus on what I’m saying without being distracted by unnecessary text.
  • Signalling Principle – I made sure to use highlights to point out the key parts of the star life cycle. For example, while talking about the different life stages of a star, I zoomed in and magnified each stage to really draw attention to it. When you’re looking at something like the life cycle of a star, it’s super easy to get lost in all the visuals. That’s why adding those visual cues, like magnifying, helped guide viewers to the important parts. Honestly, this was one of the easier principles to apply, but it made a big difference in making everything clearer and more engaging.
  • Modality Principle – So, this principle basically says that people learn better when they listen to narration instead of reading a bunch of text. I kept this in mind by explaining the visuals with voiceovers rather than just putting up labels or text. Like when I explained the singularity, I used an animation to show how gravity becomes so intense near the center of a black hole that even light bends around it. I spoke over the animation instead of using text, so viewers could focus on the visual while listening to my explanation. This way, it was easier to understand without overwhelming them with too much information at once.
  • Segmenting Principle – I broke the video into three parts: I started by explaining how stars work, then went into what happens when they die, and finally talked about how black holes form. This gives viewers’ brains enough time to process each concept properly, which is exactly what Cognitive Load Theory is about.

I imagined the audience as the EDCI 337 class: students from different programs, most of whom might not be super familiar with black holes or the life cycle of stars but are curious about them.

As I applied Mayer’s principles, I realized how important it is to think about how people are going to absorb the information. I found the signaling and coherence principles pretty easy to implement, but redundancy was harder. I kept wanting to add extra text to explain things better, but I had to stop myself.

I found using Screencastify and the video editor tough at first, but after working through this project I feel like I’ve got a better handle on how to design multimedia that actually helps people learn.