Thank you, Servus, for organizing the festival. Yes, I'll present my project, which I finished very recently, called Dasha's Kitchen, My Magical Grilled Cheese Sandwich. It's important to note the context here, and it might not be clear from the beginning what I'm talking about and how the grilled cheese sandwich plays into this. Paris Marx already talked about this during the keynote: the country was erupting with protests against the retirement reform, which was eventually passed despite the fact that citizens were very clearly against it. And even though the reform was passed in March, the protests continued well into early June. During this moment, a lot of the attention was focused, of course, on the retirement reform, and the French government used that space to pass another law that, in my opinion, went largely unnoticed. On the 19th of May, the French government passed a law on the measures to be taken to ensure that the Olympic Games, which are happening in France this summer, go smoothly. In this law you find provisions about stores remaining open on Sundays for tourists, about anti-doping measures for athletes, and about a handy new security measure that was already being used in France beforehand, called algorithmic video surveillance. It can also be called intelligent video surveillance or augmented video surveillance.

I have to admit that I was quite late to learning about this law; I stumbled upon it about six months after it had already been adopted. The reason I found out about it is that a curator approached me and asked if I had a work on the Olympic flame, to which I said, of course, no, I don't. But because the Olympics are happening in France this summer, even cultural funding had to somehow incorporate sport and the Olympics. I never wanted to do any work about the Olympics. It was not an interest of mine, and neither are sports, particularly when it comes to my art practice. But I decided I would do a little research, see if there was something interesting I could find, and whether I could eventually make a project about it. And then I stumbled upon this law, and I told myself it would be a huge shame if an exhibition sponsored by the French government didn't have a work bringing this algorithmic video surveillance to light. And the exhibition space, which is an old squat, was surprisingly pretty thrilled about this, so they let me do it.

I didn't want to make a work that was just about this specific surveillance technology; I wanted to introduce a wider critique of AI as well. So I started doing some research. Of course, we already talked about this in Celena's keynote, so I'm not going to say much about Alphonse Bertillon; I'm sure you all already know about him.
But the reason he was interesting for my research is that some people consider him the father of biometric identification, and biometrics turn out to be one of the main debates around this law. I'm going to start calling the technology AVS, for algorithmic video surveillance, because the full title is long. The debate exists because under the GDPR, EU countries are in principle not allowed to process biometric data. So the French government assured us that no biometrics would be used — something lots of privacy activists had already had to fight for, because originally facial recognition and other biometric recognition were going to be used. One of the most famous privacy activist groups in France, I would say, is La Quadrature du Net. They point out that for years the French government has been trying to change the meaning of biometrics altogether, so that it signifies only a few very specific kinds of recognition, such as fingerprint identification or iris scans. When I was reading this, I was thinking, yeah, that's the impression I had in my head as well. But biometric data actually means personal data relating to physical, physiological, or behavioral characteristics that make it possible to confirm someone's identity — which is a very broad definition. By this definition, a lot of things could be considered biometric: your gait, the way you walk, maybe even the clothes you wear, because they allow you to be recognized as yourself. And what AVS offers us is a kind of surveillance where algorithms analyze video streams to find abnormal behavior and then report it to a human — a human guard who chooses what to do with that information.

Here you have the official report describing this new technology, which I mainly placed here for a different reason; I don't expect you to read this boring report in French. The main reason I placed it here is to show you that in this official report from the French government, they open their explanation of what AVS is with the classic AI spiel, which I'm sure we've all heard many times, whether in newspapers or in exhibition texts. It goes: on the one hand, technological advances call into question the effectiveness of the resources that law enforcement agencies need to accomplish their mission; on the other hand, they raise questions about the ethical limits that must not be exceeded, at the risk of sliding down the slope towards a society under automatic surveillance, against which literature and cinema have warned us. And next to "cinema," as you can see, there's a footnote that says: see, for example, the Steven Spielberg film Minority Report. I have to say, if I had a euro for every time Minority Report was brought up in a critical discussion of surveillance, I would be a very rich artist right now. It's a reference I kept seeing everywhere; it was constantly popping up. But I have to admit that finding it in this super-official report submitted to the National Assembly was still somehow shocking. And it was also funny to see them saying: we have to make sure we don't end up in this ultimate surveilled future predicted by Steven Spielberg — but look at how great our technology is! Isn't it fantastic?
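To make the "algorithms analyze video streams and report abnormal behavior to a human" pipeline a bit more concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the event labels, the confidence threshold, and the fake detector are illustrations of the general shape of such a system, not the software actually deployed under the French law.

```python
# Minimal sketch of an AVS-style pipeline: an algorithm watches the feeds,
# and a human operator only sees what the algorithm flags.
# Hypothetical throughout -- labels, threshold, and the fake detector are
# illustrations, not the systems actually deployed in France.

from dataclasses import dataclass
from queue import Queue

# Categories treated as "abnormal". Deciding what belongs in this set is
# exactly the power the vendors hold.
ABNORMAL_LABELS = {"crowd_surge", "abandoned_object", "wrong_way_movement"}

@dataclass
class Event:
    camera_id: str
    timestamp: float
    label: str         # what the model thinks it saw
    confidence: float  # model confidence in [0, 1]

def detect_events(frame: dict) -> list[Event]:
    """Stand-in for a vendor's computer-vision model.

    Here it just echoes annotations carried by the fake frames below;
    a real system would run a neural network over pixels."""
    return frame["events"]

def watch(frames, operator_queue: Queue, threshold: float = 0.8) -> None:
    for frame in frames:
        for event in detect_events(frame):
            if event.label in ABNORMAL_LABELS and event.confidence >= threshold:
                operator_queue.put(event)  # the human guard decides what to do

# Simulated feed: one ordinary frame, one frame that gets flagged.
feed = [
    {"events": [Event("cam-01", 0.0, "pedestrian", 0.99)]},
    {"events": [Event("cam-01", 1.0, "abandoned_object", 0.91)]},
]
alerts: Queue = Queue()
watch(feed, alerts)
while not alerts.empty():
    print("ALERT:", alerts.get())
```

Note that the political question lives almost entirely in that `ABNORMAL_LABELS` set: someone has to decide what goes into it.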
To give you a very brief idea of how this technology works — or will work, or is working already: originally, surveillance in France looked like this. You have a centre de supervision urbain, an urban supervision centre, which looks like this: human guards sitting in front of tons of screens, watching tons of video feeds from surveillance cameras at once. Of course, it's really hard to notice when something is going wrong when you have a hundred video feeds in front of you all the time. And this is the point the French government used a lot to sell this technology. They basically said: humans are not keeping up with the number of surveillance cameras we've placed all over the country. And instead of reducing the number of surveillance cameras, they tried to sell — or rather succeeded in selling — algorithmic surveillance. So now, instead of all of these screens with humans looking at them, there will be one screen, a black screen, probably with one person sitting in front of it. The person just sits and waits for something to pop up, because instead of humans watching the surveillance feeds, an algorithm will be watching them, detecting abnormal behavior and showing it on the screen, and then the human guard has to decide what to do about it: whether to call the police, or to intervene, or whatever other options there are.

The problems here are so many that I don't even know where to begin. Start with the fact that it's tech startups, private companies, that are making this technology and providing these tools, and they're also the ones who get to decide what abnormal behavior is. I mean, I think the government surely has some role in it, but I can already tell you that one of these startups specialized in machine learning and machine vision algorithms, XXII, said that it would not allow humans on the ground to be detected. If you're just a human on the ground — I guess the thinking here is that it could be a homeless person — we are not going to detect them. However, we will detect a person on the ground following a fall (a distinction sketched in code below). So this is a private tech company telling the government: we are not going to detect what you want us to, because we think it's unethical. Also problematic is the fact that these startups are getting masses and masses of free data from the government. These are surveillance cameras whose feeds used to be available only to the government or the police, and now all of these private tech companies can train their algorithms on masses of free data. And of course, we can't ignore the fact that a human is, I think, much more likely to trust what a computer considers abnormal. When a machine that is supposedly logical tells you, I think this is wrong, a human is probably going to say, yeah, you're right. Anyway, the list of problems here goes on and on. But as I said earlier, I didn't want this project to be just about surveillance. After rejecting the idea of doing any kind of work related to AI for a few years, I thought it would actually be important to include some of the things I find problematic about AI in this project. The problem is that I find so many things problematic about AI that it was hard to choose what to focus on.
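As an aside, the distinction the startup describes — ignore a person simply lying on the ground, but flag a person on the ground following a fall — amounts to a temporal filter over detections. Here is a rough sketch of how such a policy could look, reusing the `Event` type and `ABNORMAL_LABELS` from the earlier sketch; the label names, the time window, and the per-camera memory are my own assumptions, not the vendor's actual implementation.

```python
# Sketch of the "person on the ground only after a fall" policy.
# Label names, time window, and per-camera memory are assumptions
# for illustration, not the vendor's implementation.

FALL_WINDOW_SECONDS = 10.0
last_fall_at: dict[str, float] = {}  # per camera: when we last saw a fall

def should_alert(event: Event) -> bool:
    if event.label == "fall":
        last_fall_at[event.camera_id] = event.timestamp
        return True
    if event.label == "person_on_ground":
        # Alert only if a fall was seen recently on the same camera;
        # a person simply lying there is deliberately ignored.
        fall_time = last_fall_at.get(event.camera_id)
        return fall_time is not None and \
            event.timestamp - fall_time <= FALL_WINDOW_SECONDS
    return event.label in ABNORMAL_LABELS

# A person already on the ground is ignored; the same detection
# shortly after a fall raises an alert.
print(should_alert(Event("cam-02", 5.0, "person_on_ground", 0.9)))  # False
print(should_alert(Event("cam-02", 8.0, "fall", 0.9)))              # True
print(should_alert(Event("cam-02", 9.0, "person_on_ground", 0.9)))  # True
```

Even this "ethical" behavior is a handful of design choices — a label, a window, a rule — made inside a private company.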
So for the research behind this project, I decided to focus mainly on the human labor hidden behind AI. I'm sure we've all heard many things about it, so I won't go too deep into it. But I also like a term popularized by Astra Taylor: fauxtomation. I think it describes very well this kind of hidden work that is presented to us as automated but is actually done by humans, in order to promote the idea that human workers are obsolete — which is also often done by using the language of magic. We find this language of magic in Arthur C. Clarke's writing, where he says that any sufficiently advanced technology is indistinguishable from magic, but also in texts that promote this algorithmic video surveillance. Here we have once again the startup XXII, whose directors say, basically, that the magic of computer vision is that it's infinite, and that the only thing that can prevent its full capacities is your imagination. Saying that something is magical or magically produced also hides the cost of its production, while reinforcing an image of an extremely complex and obfuscated technology operating like a black-box mechanism beyond our understanding — which is actually what my previous project, Advice Well Taken, is about.

And here's a bit of shameless self-promotion for that project, which is downstairs in the basement. If you haven't gone to see it yet, I'll tell you quickly about it. With this project, I was documenting tech lore. The way I understand tech lore is all sorts of beliefs we possess about the way we understand, or don't understand, how technology works. For example: thinking that your eyes will become square if you stare at a screen for too long, or that a gas station will blow up if you use your phone there, or that your phone can sense your brainwaves and sends you targeted advertisements based on your thoughts. These are all things documented within this project, which you can see downstairs — there are also lots of postcards, so please take them. And I believe we all have some kind of story like this; we share these beliefs in order to regain some kind of agency over these extremely complex devices.

But okay, let's get back to the grilled cheese sandwich. You might be wondering how I ended up with this topic based on all of the AVS research. At some point, I had read so many different texts that I got kind of lost; I wasn't sure how to start making actual work from all the research I was doing. So I decided to go backwards and simply look at how people define an algorithm, how people define artificial intelligence. And I stumbled upon many videos on YouTube, and also articles online, that describe an algorithm as a recipe for a grilled cheese sandwich. If it had just been the notion of a recipe, I wouldn't have thought twice about it. But the fact that it was consistently a recipe for a grilled cheese sandwich made me think, okay, maybe there's something there. So here's one example of such a video: "An algorithm is just a set of instructions. It's just like a recipe. A more complicated algorithm, called a machine learning algorithm, can help artificial intelligence learn through identifying patterns and considering the results of previous experiences." Yeah, and this video is not the only one.
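Since the whole project hangs on the "algorithm = grilled cheese recipe" metaphor, it may help to take the metaphor literally for a moment. Below is a toy sketch: first the recipe as a classic algorithm, a fixed, human-authored list of steps; then the part the metaphor glosses over — in a machine learning "recipe," the decisive numbers are not written down by anyone but fitted to examples. All names and numbers here are invented for illustration.

```python
# The metaphor taken literally: an algorithm is a finite, ordered list of
# unambiguous steps, written down by a person in advance.

def grilled_cheese_algorithm(bread: str = "sliced bread",
                             cheese: str = "cheddar") -> list[str]:
    """Return the fixed sequence of steps; nothing here is learned."""
    return [
        f"Butter two slices of {bread}.",
        f"Place {cheese} between the slices, buttered sides out.",
        "Grill in a pan over medium heat until golden brown.",
        "Flip and grill the other side.",
        "Serve.",
    ]

for step in grilled_cheese_algorithm():
    print(step)

# What the recipe metaphor glosses over: in a machine learning "algorithm",
# key quantities are fitted to (often undisclosed) data, not authored.
toast_examples = [(90, "pale"), (120, "golden"), (150, "golden"), (200, "burnt")]
learned_limit = max(t for t, label in toast_examples if label == "golden")
print(f"Learned grilling limit: {learned_limit} seconds")
```

The contrast is the point: the first function behaves the same for everyone who reads it, while the "learned" limit depends entirely on which examples it was fed.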
I mean, I kept finding very many videos with this specific metaphor of a grilled cheese sandwich. So then it became kind of clear to me what I needed to do. I came up with a strange idea — every time I pitched it to someone, they just started looking at me very weirdly — of making a video that starts out as a tutorial for making a grilled cheese sandwich but becomes a loop of reaction videos, like the ones you find all over YouTube, with each video moving further and further away from the simple tutorial and deeper into a critique of AI, using the specific example of this algorithmic video surveillance in France. The video is around 15 minutes, so we don't have time to watch the whole thing right now. I'll show you a compressed version of a few of the clips, and if you want to see the whole thing, I can share a link where you can watch it afterwards.

[Clips from the video:] Hey everybody, it's Dasha from Dasha's Kitchen, and today I'm going to show you how to make the ultimate grilled cheese sandwich. To make a great grilled cheese sandwich you can use almost any kind of bread, but my personal favorite is this kind of soft but structured bread, because it's a bit of a sweeter sliced bread. When you notice the bread is lightly... [In Russian:] Dasha's Kitchen is a very successful cooking channel, with thousands of subscribers and the simplest recipes. But even though she shows the simplest recipes for the simplest dishes, it doesn't matter — she manages to ruin everything. By the way, you know, someone recently told me that artificial intelligence is like a recipe for a grilled cheese sandwich. I know it sounds strange, but I'm absolutely not joking. [In French:] I know it seems random, a bit like the example we have here in France. But I tell myself that maybe it's not so random, as she says, that she brings up this question, because a little further into the video there's a sponsored ad, and this ad is paid for by a company called Follies. And, as if by coincidence, it's a company with a lot of algorithmic solutions, notably in everything linked to security measures. Even our influencer isn't safe from these technologies. In a recent interview, the technical director — the CTO of OpenAI — couldn't really explain what data they use for their video-generating algorithms. "What data was used to train Sora?" "We used publicly available data and licensed data." "So, videos on YouTube?" "I'm actually not sure about that." "Okay." [In English:] And I'm here to set the record straight, because as some of my diehard fans already know, I actually have a degree in computer science from MIT. Of course, God knows what data those softwares are trained on, and how easy it'll be to change exactly what they're looking for. Minority Report, anyone? No, seriously, it's going to report on minorities — which our French speaker very correctly points out, just before proposing her own courses on the exact tools that are probably implemented in these technologies. And in the meantime, don't forget to like the video and subscribe to Dasha's Kitchen. I'll be seeing you next week, when I'll teach you how to make pita bread — and no algorithms involved in that recipe, I promise you that. See you next week. Bye!

Thank you.