Is AI ruining our education, or can it be a vital tool that helps students learn more effectively? I'm Joelle, this is Emma, and this is Luciana. We are part of the Politische Bildungsprogramme at Linz International School Aarhof. Today we are hosting a debate exploring how AI will influence our education and society. We have gathered a group of teachers and students to share their diverse opinions on the topic. The students will be given a series of statements. If they agree, they will step forward; afterwards, the students who disagree will step forward and explain their opinion. People rely on AI too much in society. Please step forward if you agree, in three, two, one.

In my personal experience, just by looking around at my peers, I've noticed that a lot of people use AI to an extent I never imagined would become so common when it first started being normalized. I see people using AI interchangeably with Google for things that would be far more reliable through Google. When it comes to using AI for tasks like coming up with comprehension questions for a worksheet, I understand the use, because that's a worksheet you've already looked through and just want some extra material from. But when it comes to summarizing information and doing the work for you, I find people use AI far too often, which completely negates the whole purpose of schoolwork, as well as your own critical thinking. In the IB, I find it's very, very important to reflect on your own learning, to make connections, and to develop these skills so you can apply them independently. And AI takes all of that away, because the whole point of AI, the way my peers use it, is to skip the difficult process of thinking and connecting information and to jump straight to the result. So everyone is looking for the final answer and not learning how to get the answer on their own. And this happens too often, in my opinion. I agree with that completely.
From the point of view of a teacher: I like to give research tasks, and I explicitly make sure to tell my students not to use AI, because I want them to get used to looking at a variety of sources. Where does my information come from? Yes, of course you could use AI to tell you where the sources are. However, as you correctly said, that skips the process of actually identifying what is a valuable source, what might be unreliable, and what is reliable, and of actually gaining and retaining the information you need. It is often used as a shortcut. For instance, with homework assignments: if the purpose is to learn a new text type, such as an essay or an article, the easier, shorter way is of course to use AI to answer the question and hand in a perfect assignment. But that isn't the goal. The goal is for you to learn something, to actually improve your skills and get better at what you want to do. So I agree that it's definitely a shortcut most of the time.

Another thing to mention is that it's not only students who are at fault here. Teachers, too, sometimes take shortcuts in their teaching methods by letting AI spit out an example full of mistakes, with no real learning behind it, or by grading important work like essays, texts, and IAs by simply asking, "AI, what do you say about this?" And AI doesn't force us to think critically. Students are in general already resistant to critical thinking, and AI only gives us the impression, lulls us into believing, that we are doing the thinking, when really it is chewing the text and the information for us and then reading it back to us. We think, oh yeah, we know how it is, even though that isn't the case. I completely agree with Matteo and Aria.
I know that anecdotal evidence is not the best evidence, but to support the claim that even teachers sometimes over-rely on AI: it has happened to me personally that, when I was between grades, a teacher asked ChatGPT for an opinion on what grade to give me, and then told me about it. I was very upset, because in that context it essentially declares that you do not care enough about your students as a teacher, or about your learning as a student, to actually develop your own thinking in that area. And regarding what Aria said, I think the over-reliance on AI could also be dangerous in another sense. I want to cite an example: Archive of Our Own. If anybody doesn't know what it is, it's a website where people can upload fanfiction and read the fanfiction of others, and it has become the main platform for doing this on the internet. Recently, AO3 went down, and there was absolute havoc on the internet, because people have become so over-reliant on it. I think this assumption that chatbots, that large language models, will always be something you can rely on is incredibly dangerous, not only to critical thinking but also to your personal life.

As a student, I don't use AI at all. But when it comes to friends or classmates, the use of AI is really shoved towards you. For example, if you ask a friend or classmate, "Hey, do you know what this means? Can you tell me what this means?", the answer is, "Oh, just ask Chat, just ask ChatGPT, just ask your best friend." If I'm trying to ask you something, and instead of just saying you don't know, you point me towards ChatGPT or any other AI source, I think that really devalues everything we are supposed to be learning in school. Yeah.
You know, the question is asking us to consider the volume of use, and as a teacher I can come up with any number of anecdotes from the last few years where I'm seeing students produce work without having gone through the process of learning; they're missing all the learning that goes on around the end result. If you question the student afterwards about the essay that's supposed to be built on their own analysis, they don't know what it says, they don't know what it means, they can't define terms. They haven't learned to write, they haven't learned to structure an essay, a paragraph, or an argument, because they've asked a mechanical tool to do it for them. In terms of volume, I as a teacher see students and colleagues relying on these artificial tools at a level that really surprises me. I'm surprised amongst my colleagues, but I'm really surprised at how much my students rely on this. What really disappoints me as a teacher is that students rely on these tools and then sell the result as their own work. They'll have an essay, or parts of an essay, written by a mechanical tool, and then claim it's their own. As a teacher, I'm constantly confronted with students claiming something is their work when it isn't, and that's what really upsets me. But like all of you, I'm really surprised by the volume of use. It's really taken me by surprise.

Everyone who disagrees, please step forward and feel free to explain your opinion.
Okay, so I want to start by saying that, of course, as a student myself, I have felt the temptation, you know, when you're tired in the evening and need to write about something that doesn't necessarily fit your interests. But looking at the current trajectory of AI, it's going to be here for a long time. I don't think it's going away, and I actually don't think AI development is going to slow down. So of course that raises the question of where the school system is going, and whether we need to make fundamental changes, especially because the use of AI is making so many tasks, as you said, research tasks, basically so difficult to grade, because you don't know where they're actually coming from. So I think the question is not whether we're using it too much; we also need to learn when to use it and when to apply it. Students need to learn that it can be helpful for finding sources, but that when analyzing the sources, you need to do the work yourself. Maybe for us, because we primarily grew up without AI, it's still somewhat easier. But the students entering first grade now will never know a school system without access to ChatGPT or other AI chatbots. So for them, it's actually more important to learn to resist the temptation. And as a school, you cannot just ignore AI and say "don't use it," because unfortunately, no matter what you do, students will use it. The more important thing is to try to regulate it and push for tasks that really require personal engagement, and if that means more tasks done in person at school, then I think that's the best way to go. You cannot just say "stop using AI," because most will not follow that rule.

I agree with Sophie. I want to address two things. One, the question asked if people in society are using it too much.
I want to drift away from the educational side for just one second. My best friend lives in what I like to call the middle of nowhere. He has quite a big house with a lot of solar panels. His father is a programmer for quite a big company, and he's also interested in programming himself, so together, with the help of an AI, they have created a very cool smart-home system and optimized a lot of different things. For example, they have a chicken coop with a sensor that automatically recognizes, with the help of AI, when to release food into the coop and when to open and close the door for the chickens to go outside. I don't think we should dial back on that type of use of AI. And when it comes to school: take calculators. For our math exam, we have a part where you get to use a calculator and a part where you don't. If you study for the math exam doing every problem with a calculator, you will be pretty screwed when you come to the actual exam. I think it's basically the same with AI. You can use it to help you study; I use it all the time to help me understand difficult mathematical concepts. But if you don't learn to study without it, you will face the consequences anyway. It will just take a couple of years of adaptation for us to learn that, which I think is normal.

Okay. About what Max said with integrating AI into the farm: I think that is an example of a good, purpose-driven use of AI. But when I say AI is being depended on too much, I mean that this dependence leads to a web of counter-effects and chains of effects that leave us no choice but to face it.
Even if you integrate some sort of regulation or moderation, like Sophie suggested, it is now impossible to find fully trusted sources as easily as before, because characteristic AI language is increasingly being found even in the most trusted sources, in studies that cost hundreds of thousands, maybe millions, to produce and that are considered incredibly relevant and important. Even if it's only 20 percent, and usually AI usage is higher, this means AI is being used at almost every level of society. And if you impose those moderations on students, they will use Google or other search engines to find their sources; but Google's AI will direct traffic to summaries instead of the actual sources. And if those students still go beyond Google's AI and click on the reliable sources, which they will have to learn how to identify, those sources will also have AI in them. This is where I think the danger is, because of how many AI hallucinations there are.

Okay, thank you. Now we're going to move on to the next question. Everyone, please take your places. I will now read the next statement. AI chatbots should be fully integrated into classrooms. Please step forward if you agree. Three, two, one.

So I have no one to talk to, but I will make the same argument I made with calculators: what you do is you adapt. I'm just going to give a random example. There are the so-called million-dollar math questions, and there are seven of them. They're so hard that the bet is, if you solve one, you get a million dollars, because they're so difficult that almost no one has ever solved one. So why not just put all of them into any AI and collect an easy seven million? Because you can't. I think it's collectively agreed that there are some things outside of AI's capability. What we're going to have to do is the same thing we did for calculators, and AI is not just about math, it's for everything.
We're simply going to have to make the problems harder. We have more resources. What we often tend to forget is that we as humans created artificial intelligence. It's not a thing that just appeared; we created it. So I don't see the reason in limiting it because we made it too good. I think that sounds very dumb. What we have to do is adapt, and so we are now able, as a society, to solve harder problems earlier, with more resources.

As an educator who's quite concerned with AI dumbing down my students, I have to agree that we should integrate AI into all the classrooms. It's my job as a teacher to prepare students for life. I'm a history teacher, and as a history teacher I teach writing. AI, as you said, isn't going away. It's my reality as a teacher. I have to integrate it, and I have to teach my students how to manage it. That's my job.

Will the people who disagree please step forward? So, Max, regarding your calculator example and the example of the Millennium Problems: I do agree that the problems have to get harder. But the issue with AI is that the rate at which the problems are getting harder and the rate at which AI is progressing are misaligned. If you came to a mathematics exam in the 1950s with our big TI-84 Plus CE Python calculator, nobody would know what to do. And this is the situation we are in right now. So I agree with you that it is not the best solution to indefinitely and infinitely limit AI, but it is the best solution to regulate it until our society has caught up. And regarding what Aria said earlier, that AI is infiltrating sources all over the web: of the Millennium Problems you mentioned, the only one that has been solved was solved by somebody who lives in a rural village in Russia and does not talk to anyone. I don't think technology helped him solve it.
But if you did have technology to help you solve it, it is imperative that this technology does not solve the problem for you, but guides you in the right direction, which most LLMs fail at doing.

I wanted to come back to what Mr. Greenway said, because you said it's imperative and important to introduce students to the world of AI so that they know how to work with it, and I do agree with that. But at the same time, I think it's really, really important to teach students how to work and interact with people in a world without AI. If that means, like you said, making problems harder, or maybe having them do the work in front of you so you can see it's done without the help of AI, I think it's really important that they learn they actually have enough in their own minds, and that they don't get into the cornered mindset where, if they don't understand something, the first move is to go to an AI, because I feel like that will limit their capacity to grow. And if they grow up with the thought that if they can't do something the first time or two, they can't do it at all, I think that will be very detrimental to the human race in general. If we can't keep growing as a community, then I don't know how we're going to keep discovering new things, unless it's AI discovering them for us.

Just real quickly, I'd like to add on to that. When I suggested that, as a teacher, I need to bring AI into the classroom and teach students how to engage with it, I also wanted to say I should set an example. I'm a historian, and as a historian, I'm a writer. For me, history and writing are a process. Learning is a process. And I want to model this behavior, this love for learning, and an understanding of the limitations of AI. That's why I brought a prop. It's a book. I'm a historian; I read books. And I asked AI yesterday to answer a complex history question for me, a question that historians have been debating for 112 years.
And AI will answer that question for you. It'll show you the different perspectives, and it's going to do this in about a thousand words. But if you really want to answer that question, you've got to read books. This is one book that's answering that question. I enjoy reading this book, and I enjoy not understanding but wanting to understand. I don't enjoy getting a thousand-word synopsis regurgitated by some computer, with some of the information in it verifiably incorrect. So my point is, I want to model a love for learning and a skeptical, careful use of AI as a tool that has very specific uses in my academic world.

I just wanted to mention another point. We had a similar situation in Europe, in the Scandinavian countries, where they completely abolished physical, analog writing and switched everything to the computer, and that backfired miserably. The students forgot basic tasks, and when they were sent out into the world, they didn't know how to do basic things that are very important. I don't mean forcing students to write in cursive or anything. But AI, as it is now, is not ready to properly engage with a person to help them learn, because if it's not able to tell you to your face that you're wrong, and only agrees with you, then it can't give you enough negative and positive feedback to help you understand or learn something.

Can I respond? I want to respond to both of you now. First, Zoe, I fully agree with what you said: it is misaligned. It's a learning process. The only way you can make AI more powerful is by giving it more information; that's all it has, it's a huge database. So, as bad as this sounds, AI has so much potential we can't even imagine yet. If it takes a few years of schools having students cheat by using AI, then, unfortunately, as bad as it sounds, so be it, because that is called learning. You're going to have bad times.
There's a chance that in five years we'll limit AI, and in twenty years we'll look back and say, oh, why did we limit AI? That was so stupid of us. It's a learning process. No one knows how to fully use it yet, and eventually we'll figure out the right balance, but it will take a long time. I mean, this kind of artificial intelligence is barely three or four years old. And to what you were saying, that it only agrees with you: it depends on the prompt. I strongly believe that in twenty or thirty years, schools will have prompting classes where you learn how to get out of AI what you want. For example, something I do is let AI check over my work after I'm done and give me a predicted grade. You can phrase the prompt really nicely, or do it the way I do and ask it to be as harsh as possible. It depends on what you ask it to do. It is very powerful, but you need to know how to get your information out of it.

Max, I do agree with you in the sense that it is a learning process and we have to adapt to it. But you are acting like AI is, what is it called, the Flood in the Bible, where all hell breaks loose and there is nothing we can do. But there is something we can do. Governments have legislative power. If they can make tech companies unify the charging plug for all of our devices, they can probably impose some regulations on how to properly handle AI. I also believe that AI is not a person, obviously. AI can act like a person and talk like a person, but, for Mr. Greenway, I'm going to draw the comparison that it's similar to a book. If you try to learn something from a math book, you see the process, and it's written out perfectly. AI is not perfect, but it acts like it's perfect. It tells you a perfect step sequence. But this is not what human learning is about.
If you have a teacher or a tutor teaching you, they will tell you: oh yeah, if the question is phrased like this, that's what they're asking for, especially in the context of the IB, because they like to phrase questions in a tricky way. That's something that helps you understand and makes your learning process faster, instead of stumbling over the same thing and not understanding why. So I think AI should not replace teachers, because only teachers and educators who are people, who are human, can provide the human connection that is needed for long-term learning.

Can I ask a question? You made a similar point just now to Aria, because you said, oh, websites, and you'll get to a website that has AI in it. Why regulate it? Why? This sounds really stupid, but it is quite true: we get a lot of our fear of AI from movies, because movies have portrayed AI as this big bad thing. It sounds stupid, but it is true, and you only see that, oh, AI is going to take over the world. I do agree that it has harmful potential, that if you don't use it properly, it can be bad. But I'm asking: why be so afraid of it?

I'm sorry, can I respond real quick, Max? I think you're oversimplifying where people learn. I don't watch movies. I know that sounds weird. I don't watch movies, I don't watch television; I haven't watched a movie in twenty years outside of the classroom. I'm afraid of AI, and I'm not getting that information from movies. I'm getting it from academics I trust, not movies. And Max, this may sound counterintuitive, but some AI, and some infiltration of AI into sources, is bad because people are bad, and people produce data.
And this data that is used to train AI, let's say in the medical context, for example, is full of gender, race, and age bias, and this is reflected in what the AI learns. A human might be able to look at a data set and see: oh, this is biased data, this is prejudiced data, you have to take it with a grain of salt. An AI is not human; an AI cannot see that, and it will regurgitate the bias in its sourcing until the origin gets lost. AI models are trained on data sets produced by other AIs, so you will not be able to see the original intention or the original discrepancies, and you will simply accept a flawed data set as a perfect data set because AI suggested it to you. And there is a concrete example of this; it is not hypothetical. People actually tried it. In an ER where patient management was a big problem, they said: you fill out a questionnaire, AI will process it, and you get a score for how urgent your case is. We already know for a fact that African Americans and women have to be sicker to get the same amount of care as men, which is not just. When the AI was trained on this data, the pattern kept repeating: African Americans had to be in grave condition just to get the mere attention of a doctor, because the doctor would just say, oh yeah, patient XYZ has these problems at this level of urgency, I'm going to see them. People were in grave danger. People got hurt because of that, until the system was shut down.

Okay, our next statement is: people who don't learn to use AI will become unemployable. If you agree, step forward. Three, two, one.
So AI has become so omnipresent in our society, omnipresent in almost all areas of our lives, that if you don't know how to work with it, you just won't stay afloat in the job market or in society. Yes, there's a difference between relying on it, using it, and just being able to understand how to work with it. But to have no idea how it works at all, it's like a very old, stuck-up person not wanting to stop riding a horse because the car was invented. You have to get with the change, and if you work as a collective, that's how you make the most improvement: you find the correct balance where you can work with it, rather than deeming it unnecessary and ignoring it. If it's so omnipresent, you will have to use it.

I agree with that. So again, I'm going to use teaching as an example. When you are learning how to teach, learning how to become a teacher, you are confronted with a hundred thousand different tools, and you need to learn how to use those tools. Of course that's beneficial, not only for making yourself a better teacher by getting used to using these tools correctly; it's also training your brain to learn something new, to adapt to something new. And I think that's the key word: adapting. It's not about including it or not including it. It's making sure that if you would like to include it, you could, and recognizing that it's something that will inevitably invade the job market and that you will need to learn the skill, whether you're a new teacher, a young teacher, or in any other job. New employee or old employee, everyone has the same prerequisite of learning new techniques in order to improve at their job. And I think that's going to be the key question: how do I adapt to this? How do I learn about this? How do I use this in a job? Obviously, there are certain exceptions.
If you don't know how to use AI, I don't think your chances of becoming an oil rig worker or a construction worker are going to go down. But for 99 percent of jobs, it's just going to become a normalized thing. I don't think anybody here would respect a mathematician who doesn't know how to use a calculator. So it's going to become a regular part of our working lives, and it's here to stay. It's not going anywhere anytime soon, and by anytime soon, I mean in our lifetimes. So we might as well work with it, and I think a lot of companies will start working with it.

Real quickly: earlier I made the example of the book, and I might have come across as an old stooge who just reads books and doesn't watch movies. But I also came to today's session with some research I had done. Again, this goes to learning to use AI as a tool, not as a crutch. I generated a list of sources I could read using an AI search engine. So I used it as a tool: I didn't read the AI-generated information itself; I looked at the articles the AI recommended, read through those, and cut and pasted some text to prepare for this dialogue today. So again, I agree. Is it going to make us unemployable? I think that will depend on what job we're looking for. But as an academic and as a teacher, I realize it's there. I'm having to integrate it and learn it, but I'm trying to be careful. So yes, I came with a book, but I also came with work I had started by generating information from an AI search on the internet. So I'm not necessarily a stooge, but yeah.
People who disagree, please step forward. So I want to start by talking generally about AI in the workforce. As we know, these systems are already taking over a lot of jobs, and the argument is being made, though of course we can't validate it, that more jobs, or more job openings, will be made possible through that. Of course, those openings will likely be linked to AI, so knowledge of AI will be something you need, a core skill in future jobs. However, I do think that even in the future there will be jobs that value human knowledge and emotional intelligence, EQ, over knowing how to talk to a chatbot or how to work with AI. For example, there was actually a survey asking people whether they would rather be examined by a human doctor or by an AI, and I think more than three-fourths said they would rather be examined by a human. And I think everyone sitting here would probably agree, because having a human look you in the face and tell you what you have is so much more comforting than reading your diagnosis on a screen. Yes, doctors will need to work with AI, but I also think their basic human skills will be valued over their capabilities with AI.

I very much agree with Sophie. I think there is a critical difference between not needing to use AI in your job and it being imperative not to use AI in certain parts of your job. For example, as Max said, if you're a construction worker, or a pig farmer in the middle of nowhere with a non-automated chicken coop, then you don't need AI; you would gain nothing from it. And have you ever seen an old person trying to learn how to use a phone? I think that's pretty much how trying to train someone who doesn't get it to work with AI would go.
And then to the part where it is imperative not to use AI in certain parts of your job: like you said, if I was being examined by a doctor and they started opening up ChatGPT and googling my symptoms, I would leave. In fields where the stakes are high, not just healthcare but also law and politics, what makes the work so special is the human thought and the human connection behind it. So I think that not using AI in these sectors will make them stand out more and have a more positive impact.

I definitely agree with that. I just wanted to say that I am 100 percent sure that everyone here in this building knows how to put what they want into a single sentence for some sort of AI. Anybody who doesn't know how to do that just doesn't need a job; if you can't write a single sentence asking an AI for what you want, you're a toddler, you don't need a job. And second, let's say someone quickly forgets something very important in their job. What are you going to do, go straight to AI and ask, how do I do that? No. There are YouTube tutorials online, made by actual people, for exactly these scenarios. People put in the time and effort to make those videos and tutorials for anyone who forgets, and not using them just negates that time and effort, which matters. It's important to see and value all the effort that went into them. I think not using them is stupid.

It's very important to mention that we should maybe stray away from thinking: oh, if you know how to use AI, you're going to use AI. Because in our world, there are always malicious people with bad intentions.
So if a person doesn't know how to work with AI, how to spot AI-generated things, things could go very wrong. In a company, when someone takes an AI source, or reads the first summary that Google spits out and takes it as fact, things can go wrong. People can get hurt, money can be lost. And it's important to distinguish knowing how to use AI, and using it, from having only basic knowledge of AI and not using it. We should think of it like a word processor: it's important that you know how to use Word, but nobody's going to have you drawn and quartered if you use Google Docs instead. I also want to say that it's not actually possible to use AI everywhere. For example, and I can speak from experience here, my father's company is a production company. For a company like, let's take Porsche, that produces a lot of the same cars over and over again, I could imagine AI being very useful, because you're doing the same job over and over. My dad's company, however, makes industry-sized ovens, every single one completely different from the others, and they take three to four years to produce. So even if you want to use it, and even if you know how to use it, it's not always helpful yet. I want to go back to Matteo, to what you said. I think the core factor, or the core question, is not knowing how to use it but knowing when to use it. And this also comes back to what you said about negating the effort of the people, the YouTubers, the people who write studies, who put in the time. But I think we're generally seeing a trend where people value the quick dopamine kick they get when they type in a question and get exactly what they're looking for.
When you have a question and you start reading a study, or you start watching a YouTube video, it might mean you just watched a video for 20 minutes and still don't have your exact question answered. And I think a lot of people don't want to go that route, spending so much time and then maybe not even getting their question answered. So I think it's really important, in every job, to see that, as has been said several times, you can use AI as a tool, but it should not be what you're mainly relying on. Also, Max, to what you said, that AI is definitely here to stay, and Matteo, to what you said, that you don't have to use it: I agree with both of you. However, I also think there is a separate way this could go, not toward AI being an all-useful, omnipresent, all-powerful digital being. I'm actually going to use an example from the early days of the Internet, or what I presume to be the early days, I wasn't there: WordPress. Like most large language models, it's a way to simplify your thinking process, because you don't have to download a coding platform, you don't have to use a lot of bootstrap packages, whatever. You can just make your website, nice and easy. However, in the IT world, if somebody tells you that your website looks like it was coded with WordPress, that's kind of an insult, because it seems you did not put in the effort, you did not care enough about the task to actually devote yourself to it. I think this could happen with AI in employment.
If somebody reads your application to a job and they see the em dashes and the Oxford commas, which unfortunately used to be indicators of proper grammar but are now indicators of AI, they might immediately recognize your text as AI and not want to employ you, if AI does take the route of becoming an undesirable easy way out. Just real quickly: I have a friend at the university I used to work at, and he does graduate admissions for the history department. He sits on a panel considering who's going to be admitted to the graduate program, master's and PhDs. The applicants have to supply a number of pieces of information, including essays that they wrote. These essays are now being run through algorithms to determine whether or not they were written with the aid of AI. And if they were, or if they're believed to have been, those candidates are rejected out of hand. So again, if we're using these tools, there can be a risk. You have students who just graduated with their bachelor's degree, they got straight A's, they want to get into a master's or PhD program, they use AI to write their CV, to polish their CV, to write their admissions essay, and they're getting rejected out of hand, because people just don't want that in that environment. In that specific environment, they want people who write their own work. Okay, I hate to cut the discussion short, but I think we should move on to the next statement, so please stand up and go back to your places. Okay, our next statement is: AI chatbots worsen writing and creativity. Three, two, one. The way that AI is built, it gives the most probable response to what you are asking it for.
Meaning whatever response it gives you is going to be a very average answer. A lot of teachers notice, when they have an essay assignment, that a lot of students will hand in the same essay, reworded slightly differently: the same ideas, the same structure. This is because AI has no diversity in its thinking and its responses. It has the same structure for almost everything and will very often land on the most average answer. So generally, this leads to a real lack of diversity. Additionally, AI has a lot of hallucinations. AI hallucinations are when AI completely makes up information, makes up citations, and presents them to you very casually as if they were true. Now, if you take these hallucinations and try to dig deeper into the internet, you will notice there is absolutely no source for the information. AI is just giving you what sounds probable, what sounds like the next thing in the writing you wanted to do. And this is very dangerous, because it spreads mass misinformation. This actually goes back to Max, when you asked me why I want to limit the use of AI at the moment: it's because of this misinformation. It leaks into so many sources that the misinformation becomes information. You will look at a reliable source, it will contain the hallucination, and since it was a reliable source, it will end up being trusted by too many people. And this is something that was happening before AI as well. There was a myth that if you took the muscles of an adult man and completely stretched them out, they would go around the world eight times. This was used as a fun fact in so many contexts, in school textbooks, in essays. It was only looked into recently and traced back to find that the original source used a citation that didn't exist. And this happens much more often today with AI.
Yeah, I mean, I agree. That's literally how it works: it calculates the most probable first word, and then, over every word in its vocabulary, it asks, okay, what could the most probable next word be, and it generates a text off of that. I do see, though, that with more information over time, if we give it more information and prompt it on how to write properly, though that will take forever, it has the possibility of getting better. But for now and in the near future, I would agree with the statement. I wanted to talk about a video I watched once where this guy was explaining how making a product, making art, is sort of a mountain that you have to cross over. There will be ups and there will be downs, but in the end you will learn, and then you will create your product and you'll be happy. The main point of the video was that AI is not necessarily destroying creativity, but it is destroying our patience, how patient we are with actually doing the steps needed to create something we want to create. And honestly speaking, I think it's not only destroying patience but also destroying creativity, because why pay 10 euros to support a local artist who makes really fun stuff when I can just go to AI and ask it to make the exact same thing for me? I don't spend any money, and nothing bad happens to me. But that just destroys the local artists, the people who make stuff online on a small budget but really want to do what they do because it's fun. And I think that destroys people's will to chase their dreams and do what they want in life, instead of working a nine-to-five like everyone else. Okay, great points. I think we should move on to the next statement, since everyone agrees anyway. It's not very important, no? If you have a strong answer, just say it.
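[Editor's note] The word-by-word generation loop described in the turn above can be sketched in a few lines of Python. This is a toy model with made-up words and probabilities, purely for illustration; a real large language model scores a vocabulary of tens of thousands of tokens with a neural network rather than looking them up in a table.

```python
# Toy sketch of next-word prediction: given the words so far, a "model"
# assigns a probability to each candidate next word, and we always pick
# the most likely one (greedy decoding). Vocabulary and probabilities
# here are invented for illustration only.
TOY_MODEL = {
    (): {"the": 0.6, "a": 0.4},
    ("the",): {"cat": 0.5, "dog": 0.3, "essay": 0.2},
    ("the", "cat"): {"sat": 0.7, "ran": 0.3},
    ("the", "cat", "sat"): {"<end>": 1.0},
}

def generate(max_words=10):
    words = []
    while len(words) < max_words:
        # Look up the probability distribution for the current context.
        dist = TOY_MODEL.get(tuple(words))
        if dist is None:
            break
        # Greedy choice: always take the single most probable next word.
        next_word = max(dist, key=dist.get)
        if next_word == "<end>":
            break
        words.append(next_word)
    return " ".join(words)

print(generate())  # the cat sat
```

Greedy decoding like this also illustrates the "very average answer" point made earlier: if the model always picks the most probable continuation, every user asking the same question gets the same text, which is why real systems add randomness (sampling) to vary their output.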
So the biggest problem is that all AI does is give you regurgitated slop over and over again. And if we lose the value we place on human creation and only value whatever is faster and faster, we are doomed as a society, honestly. To add really quickly onto what Matteo said: AI slop is not what people want. There is literally a website called youraislopboresme.com where you can pretend to be an AI and draw things for other people, because people want creativity and they want human interaction. Ever since the dawn of time, art has been something we have made ourselves. So I don't think we should stop that just because modernization is the trendy new thing. If we look at archaeologists and sociologists defining societies and cultures, one litmus test is art. And art is human creativity. And I agree. I don't want to sound hysterical, and it's easy to sound hysterical, but I do think that once we devalue human creativity and process-driven learning and process-driven creativity, I don't want to say we're dooming humanity, but I think that's a dangerous step in the wrong direction long term. I think the value in art comes from intention. A lot of people argue that AI art is bad because it looks bad, but unfortunately, since it is developing so fast, eventually it will look good. The reason I think that no matter how good AI gets it won't be on the same level as human art is that intention: those brushstrokes, those specific symbolisms, and the fact that a human thought, I want to portray this and bring this into the world. AI might have a narrative that it represents in its art, but it will never be equal to a human's intention, in my opinion. Great. Okay, let's move on. The next question is: does the value of AI outweigh the environmental damage it causes? Step forward if you agree, in three, two, one. Well then, step forward if you disagree.
So I want to start by saying that, in general, we've already seen, and we can generally feel, that AI is taking over many sectors in the world, and I think it will take over more and more. That means AI will become more and more powerful and therefore need more and more water and other resources. It needs cooling stations to cool down those huge data centers, and building them will probably require deforestation and everything. And I think that is basically the issue: it portrays us humans as the most selfish creatures, putting our need for efficiency, our need for getting information quickly without doing our own research, over the well-being not just of ourselves but of the other animals and plants that live on this planet. And if it keeps going, we won't even be able to enjoy AI for long, because there won't be anyone left to enjoy it. I agree with that completely, especially considering what people use AI for. They ask AI, okay, what should I cook for dinner today? Or, what's in my fridge, tell me what to cook. And those small prompts already cost so much energy, which contributes to global warming, and I don't think enough people are aware that these small prompts cost this much energy, or that they're inflicting this much damage.
You may say, okay, I will counteract that by taking the tram or the train or public transport, but nevertheless, if you use or overuse AI for minuscule tasks that you could technically do by yourself, then you're contributing to global warming, to climate change, and inevitably making the planet worse off. And this is just in general, but we've learned, or rather haven't learned, from history that in any area of life the ends can't justify the means. In this case we can have our vision of AI helping us with all these things, of making public transport systems so much more efficient, but to get to that point we would have to expend so much, to do so much. Like what Sophie said: what if we don't even manage to get to that point, and just expend all the resources, all the water we have, chasing this goal? It's not a sustainable solution, just to have this fever dream of all the good things that will happen if we give it enough time. And, this is a quote attributed to an Apache war chief: only when we have cut down the last tree and caught the last fish from the lake will we realize that we cannot eat our money. I think that was a very philosophical answer, and I agree. I also agree with you, because I just wanted to throw in something I read. This was a little while ago, but Sam Altman, the CEO of OpenAI, which owns ChatGPT, actually made a public statement asking users to stop saying please and thank you to ChatGPT, because it wastes so much energy, and I thought that was kind of funny. Yeah, I agree with you, and with you especially. Also, concerning what Sophie said about the resources and the water that AI uses: it's not only water.
If you read any newspaper articles or were online in the past few weeks or months, you will have noticed that there is currently a RAM crisis. And I am not a computer person, but Timmy explained it to me, so. RAM is being bought up by all these AI companies for their data centers. And aside from the economic implications this could have, something you need to make RAM is silicon; that's also why Silicon Valley is called that. And it's a material you can only get from mining. So when the RAM market either inevitably crashes or needs to expand further, we know that mining and refining silicon can be dangerous and environmentally harmful. This is also a factor to consider, I think, especially because mining often takes place in low-income or developing countries that rely on the primary sector to fuel their economy. I think this could lead to exploitation. And when the resources inevitably run out, like you said, an economic crash could follow, like the dot-com bubble burst in 2000. Okay, thank you. Now let's move on to the next question. Okay, so now we will be moving on to the final question of our debate today, and that is: we should slow down AI development. Step forward if you agree, in three, two, one. I think it's extremely important to slow down AI development, because what we're currently doing is just throwing ourselves in the deep end and hoping we'll float up. And we can see this especially in the economic sector, where everybody is hopping on the AI bandwagon, and it's growing at such a rate that if it crashes, it's not only going to impact the big governments and the big companies, it's going to impact everybody, in every centralized part of our lives, where most of us won't be able to afford a laptop because RAM is too expensive, or, after it crashes, there are going to be abandoned data centers just collecting rust.
So if we maybe scale it back, we can moderate it and guide it in a better direction than where it's currently headed. I feel like the question leaves a lot of room for interpretation. I agree with Matteo, but I also think it is sensible to realize that, with money and profit being the driving factors in our economy and our society, it is completely unrealistic to expect companies to slow down their AI development. However, it is upon governments and legislative powers to try to regulate artificial intelligence. When the internet first came up and there were no regulations, it was like the Wild West: a lot of very disturbing, very illegal content surfaced, because there was just nobody to guide and regulate, to say what is and isn't allowed on the internet. I feel like a similar thing should be done with AI, because expecting it to slow down or stop developing is unrealistic and will not happen. But the governments and governing bodies that are supposed to care for us, when this is such a prevalent topic, have an obligation to obey that oath and that purpose. I just wanted to add a small thing. I feel like if we continue to increase the rate at which AI is developing, that is too much of a risk, a risk we cannot take right now. We just had a discussion about how many resources it would take to continue doing this, and about the consequences and all the risks if it were to crash, if the economy were to crash, for example. And regulating it will also be very difficult; you mentioned that the legislative powers should be able to, but that's not always a 100% concrete fact. I just feel like it's too risky right now.
We've been able to live without AI for decades, and it's worked out, most of the time. But I think if we continue going without AI, society will remain mostly the same, and it won't have too big a negative impact. If I could just add a real quick comment: I'm not naive. I understand the complexities of trying to regulate and enforce regulation. But a government's first responsibility to its citizens is to protect its citizens. And when we see something like AI and sexualized content, that's something that really concerns me. There are obviously things being done with AI that violate existing laws, or things being done with AI for which laws need to be created and enforced to make sure they don't happen. In this case, I'm speaking specifically of sexualized content. The government's job is to protect its people, and I think the government has to step in and find ways to regulate and enforce regulation, especially, in my opinion, when we're dealing with that kind of generated content. Preach.
I also wanted to say, and I don't want to repeat again and again that regulations might be difficult to enforce, but there are also several articles I've seen that have actually claimed that ChatGPT especially is starting to rewrite its own code. And I feel like that's actually the basis of the problem, because once we as people lose control of our own creation, I think that's when we have reached the point of no return. In that moment, of course, we're going to reflect back and see all the mistakes we've made, but if we can already see the first signs of this takeover by AI, I think it's time to act. And I don't necessarily want to go back to the creativity thing, but one more point: I feel like AI is generally replacing human values such as creativity, kindness, and caring. AI does not have feelings. AI can say the right things to you to make you think it has feelings, to make you think it cares about you, but it does not. And I think we need to keep in mind that we as humans are here to support each other, and we need to make sure we stay a community. AI should be a helpful tool on the side, but it should never take over or become more important than our sense of belonging and our sense of community. For me personally, I am scared to death about what's happening with AI currently, with thousands of billion-dollar deals happening between governments and AI companies, especially the big one with the US Pentagon, where they walked away from a deal with Anthropic, who makes an AI, because Anthropic said, we are not going to let our AI be used to make autonomous killing machines.
And then OpenAI comes along and says, oh, I'm fine with that. And we don't have any Fourteen Points, we don't have any pacts that say, AI can be used for this and cannot be used for that. We have our current laws, but as they are now, they don't suffice. Or let's not think about the war part: when I apply for a job and an AI looks at my CV and checks my internet history, and maybe ten years ago I liked a post from a rival company, and then I don't get the job. I am scared that it's going to start to infiltrate our lives in sectors where it really has nothing to do, only for the sake of optimization, of, oh, we're going to save some time doing it, but then we lose out on a lot of things and fleece a bunch of people out of opportunities, all for the sake of optimization, of making things move faster. Can everyone who disagrees please step forward? If nobody steps forward, then can everybody who hasn't already please step forward and take a seat in the circle? We would like to do one final wrap-up round, where I would like everyone to state two sentences, maybe additional comments, things that went unanswered, or strong opinions they have regarding AI. Let's start. It's exciting and frightening at the same time what AI can do. I am very excited for the future with it, and I think it's something we should be more welcoming toward rather than afraid of. We just have to know that there will be more challenging tasks that we are able to do with it. I agree with Max. It's definitely frightening what some AIs can do, especially with the sexualizing thing and making up stuff that doesn't actually exist. I think it is frightening, but I feel like what we have to do as a society of humans is not only adapt, but try to go around it as much as we can. Not against it, but try to shimmy in between.
I feel like, at its current level, AI has too many drawbacks to be such a major thing. I think it is very valuable to continue developing it, for many different reasons, but I don't think that should come at the expense of our current time and generation losing skills, becoming too dependent, and facing all the ethical issues that come with it. So I feel that AI, although it has so much potential, is at the moment at a very bad point in its development. I am scared of how people are currently using AI. And I'm also extremely worried about how the people in charge of it, the people regulating it, and the people directing it are refusing to take accountability for what they're doing. And I'm ashamed of us as people, of how we refuse to use it properly, and how we lack the accountability to distinguish between proper, fair use and just absolute plagiarism. As an educator, ultimately, I'm concerned with AI's effect on my students' thinking and on the classroom. As a human, I'm very concerned with AI's effect on how we think about ourselves: body image, the sexualization of the internet, things like that. For me personally, in my life, there are more drawbacks to AI than benefits. I believe that AI is a powerful tool, but it is in the hands of those who control it and those who legislate it to make sure that we will not be consumed by it, but instead lifted up by it. I think that, as scary as a future of AI taking over may sound, it's very calming and nice to me that, as a group, we've reached the general consensus that AI shouldn't go too far. Seeing that a group with real differences in opinion can agree that AI shouldn't take over everything shows that we're not ready to hand over all of our decisions and let AI decide everything for us.
And I think the most important thing is that we still stand for what we think is right, and that hopefully the people who control AI and can make regulations for it see that, and make the decisions we cannot make, so that it does not cause any danger or lasting harm in the future. I think everything's been said. For me, the infiltration of AI is inevitable, but we shouldn't lose what makes us human, namely our critical thinking skills, and we should remain aware of AI in our surroundings. Other than that, I think it'll be okay. Okay, thank you so much, everyone. That concludes our debate today. So whether AI improves or ruins our education depends, I think, on how we choose to use it. Thank you for watching, and thank you to all of our debaters for sharing their opinions. We hope this discussion gave you something to think about, and maybe you even changed your opinion. See you next time.