Welcome to the Critical Data Research Group afternoon. We first presented some master students' works, then we had PhD presentations, then a collaborating university presented, and now we come to the highlight of the day: two lectures, one by Fernando Velasquez and one by Julia Kloiber, which we are very happy about. Fernando, we got to know each other just recently in São Paulo at the CIMI conference at Mauá University, and I would like to take the chance to also warmly welcome Everaldo Perea, a university professor at Mauá, who is here with us. It looks like we will also collaborate in the future. What I would also like to point out is the feminist AI lecture series that we started some years ago, in 2023 already, when we began to ask: how could a feminist AI actually look? The idea is to look a little beyond how AI is discussed, communicated, deployed, and promoted — promoted so loudly, in fact, that we can hardly hear it anymore. So maybe I also have to rethink the topic of the lecture series. However, thank you very much for participating here and giving us your thoughts. Fernando, you are from Uruguay and based in São Paulo, Brazil, and I think your insights into the way you work with data, with data processing systems and learning systems, however you want to call them, will be very valuable. We are very much looking forward to it.

Thank you very much, Manuela. Thank you for inviting me. Good afternoon, everybody. I have a lot of material to show, so I will go fast, but I can give you the PDF afterwards if you want it. First of all, I want to introduce some context, because we will talk about how some artists are using AI in Latin America, and the context is, I think, different from Europe, where you are still negotiating how data translates into public policy and action — into making the city, making the life of people. In Latin America — I don't know about other continents — we are losing this fight. For example, on many of the most important websites in Brazil you need to give your facial recognition data to get in; you cannot buy from the biggest marketplace there without giving your data. In São Paulo practically all new buildings have facial recognition too, so even if you are a foreigner — if you go to an Airbnb, for example — you need to give your face, because if not, you can't get in. It's not about the law: we have a very good law, inspired by the European one, but in practice it's very difficult for the government, for institutions, and even for people to manage how to control this, how to denounce the wrong things that happen. Everybody would have to go to the government and say: it's not possible that the only way to enter this website is by giving my data. But this is not happening right now. I think it's something we are approaching all around the world; in some places the law is tighter, and in others it's easy for these companies to get in. So with this context, I want to show a lot of projects.
Because of these constraints, and also because deep access to the technology is limited, you will not find so many projects in Latin America working strictly with the data itself as a model. The themes artists are working on most are these: gender and feminist approaches, race and intersectionality, decolonial critique, ancestral and indigenous knowledge, and moves beyond anthropocentrism — interspecies dialogue, fungi, and more-than-human ecologies.

So I will start to show. As you know, Brazil was one of the countries that enslaved the largest number of people from Africa, and this is still at the center of the Brazilian soul. This artist is called Mayara Ferrão, and she is working with archival photos of women, telling histories that have been hidden. I have no access to my notes here, so let me put on my glasses and read this: the violence and stereotypical conditions in which black and indigenous people, especially women, were portrayed in the early colonial period are marked in public archives — imaged as exotic, servile, and hypersexualized figures, always corresponding to the racist and sexist gaze that predominates in society. Alongside Mayara there are artists like Bretas, also a black artist, who takes archival images and animates them with AI, giving them life, and most of the time projecting them in public spaces.

This is Moisés Patrício. When we speak about the people who were enslaved, we need to take care how we say it: a lot of technology came with them. African people developed, for example, metal technologies far ahead of other cultures, which were exported to other places in the world, and in Brazil this knowledge also came with a kind of philosophical view — not exactly a religion, but beliefs. He is a priest of a religion called Candomblé. I don't know if everybody can read it; I put a shorter statement there. The black hand traditionally symbolizes the labor of the black people who made this country, and many countries around the world. He is offering his hand, but not as labor — as a gesture of approach, of meeting each other, of recognizing each other. He started doing this by hand, with analog means, about ten years ago, and now he is starting to do it with AI. You can see his hand is there: I put the analog ones on the left, and then how the AI handled his hand, and the different kinds of approaches it made possible. Like Mayara, and like many artists in the world, black artists are struggling for money and funding, so AI is also a tool that makes it easier and faster to express these hard, strong ideas.

This is another Brazilian artist, teaching and living on the border between the US and Mexico. He is living with AIDS, and he is working with the lack of information about this population on a very difficult border like this one. I will not show the video because it would take a lot of time, but as I said, I can give you the PDF, which also has the addresses of all these artists; the images are very impressive.

This is a Uruguayan artist called Leticia Almeida.
Corpus is a project that questions how generative AI tools reproduce bias and censorship in the representation of bodies, especially female bodies. These systems are trained on packaged datasets and governed by hidden rules, which often erase what is deemed inappropriate or taboo, from sexuality to violence. All of us know about that: when you are using these tools — if you are not using Stable Diffusion on an open platform — censorship is the first thing you hit when you approach any border. Blood, or certain parts of the body you want to render, and the system closes itself to you. So she is trying to hack that functionality of the systems.

On the other hand, this is Lucas Bambozzi, a Brazilian artist working more strictly with data. He used a small language model, and — these people with boards on their front and back, we find them in the center of São Paulo exchanging gold for money, a kind of clandestine way of dealing with money. This is the second manifestation of the work: he made the model lie when he trained it, labeling the dataset wrongly — this is a tree, this is whatever else he wanted. So these people go around the city labeling everything, and everything is wrong. He is trying to bring our attention to how these systems work. There is a lot on this — for example, the Coded Bias documentary, which shows how these algorithms were deployed in cities before we knew how they would be used, and even knowing how racist these technologies are: the algorithms were simply not ready to recognize black faces, or other kinds of faces that were not in the original dataset.

This is Giselle Beiguelman, also a Brazilian artist. Last year she did an exhibition that began with women scientists who were erased from history — there are so many; new historiographies are being written all the time, around the world. Departing from the work of these women scientists, she made a plant, a flower, for each of them, using text-to-image, and she also made these portraits. I have no views of the exhibition here, but — she is Jewish — she also works with taxonomies, and with how popular taxonomy, at least in Brazil, is often racist: the popular names of plants manifest all kinds of problematic things about how we see the world, how we see others, how we see even nature. Too fast? A little bit? I can share the PDF through Bluetooth.

This is Cesar & Lois — one artist from LA, and Cesar, who is Brazilian. They are working with bio art. This piece is an algorithm together with a fungus that is eating a book. A camera films how the fungus, growing inside the book, kills some of the letters, and the new text that results is tweeted. At the same time the tweet is analyzed by the AI, which changes the text again — a kind of virus inside the system. Very poetic, and also very related to what really happens in this scenario.

Let me check something — it's there; I should have talked about this one first. This is Sofia Crespo, an artist from Argentina. She is drawing relationships between DNA and data: how we can build metaphors, or allegories, if the code that made the image, and the prompt, are a kind of DNA, and how it passes through the systems.
So what she does is: she makes an image, then compresses it into a kind of DNA of the image, and then uses the fragments to build a new image. It's a strange thing: you have an image, you compress it, you lose information, and you bring it back into the system and ask the same system to remake the image — and of course it will always be something different. The videos of the works are very beautiful; we can't watch them right now because I'm running out of time.

This one is a collective of two artists from Mexico called Interspecifics — one of my favorites. In this case they also use living beings, translating their movement and growth into audio. And in this other piece they created a non-stop AI stream analyzing the most popular music genres around the world: they check the net, find which genres are the most played, derive a kind of pattern, train the algorithm to make music in that genre, and run a stream that never stops.

This is Rafael Lozano-Hemmer, a very well-known artist who was mentioned before. He is updating an old project called Level of Confidence. There was the murder of the 43 students in Mexico about ten years ago, and he made a work out of that process — it was very difficult, because the government said nothing, nobody knew anything, there was no investigation. That work used the algorithms of that time to track faces and match your face against the faces of the missing students. What Rafael is doing now relates to Argentina, where there was a very hard dictatorship — more than 30,000 people killed or disappeared in the 1970s and early 1980s. Many people are still missing; some children were kidnapped and given to military families, and people are still searching, in Brazil, in Uruguay, in Chile. What they did is use the same approach with today's technology to match your face with one of these people who are still missing, and then give you all the information: about the family, where they were lost, what happened to them, which cause they were fighting for. (A small sketch of how such matching works technically appears a little further on.)

Strictly speaking, the project closest to the idea of critical data is this one — an older project, from 2020, by Bruno Moreschi, also Brazilian, who started to talk with Turkers, the people who trained the very first algorithms by hand. Now we have models that know how to learn, but at the very beginning there were people doing this all around the world — it was also mentioned before here. When Bruno was doing the research it was very hard even to find this information, and harder still to find the people, because they were kept hidden. But he found a way into the system and talked with many of them, bringing their histories to light. He made a documentary and also a website where you can go in, talk with these people, exchange, and learn how they live.

There is an exhibition I saw recently — this is a tip for whoever can go to Vienna — by Hito Steyerl at the MAK Museum. It approaches this differently, but it is about the same thing: the people who do this training work, in this case in a refugee camp. Most of these people don't know what exactly they are doing; they are given their lessons, they spend half the day on the tasks, and they build a kind of mythology of what they are doing, because they hear someone say it is this or that, but they don't know exactly.
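To make the face-matching idea above a bit more concrete, here is a minimal sketch — not Lozano-Hemmer's actual code, just an illustration of the general technique — using the open-source face_recognition library; the file paths and the small archive of reference portraits are hypothetical placeholders.

```python
# Sketch of embedding-based face matching (illustration only, not the
# artist's implementation). Requires: pip install face_recognition
import face_recognition

# Hypothetical archive of reference portraits (e.g. photos of the missing).
reference_files = ["archive/person_01.jpg", "archive/person_02.jpg"]

reference_encodings = []
for path in reference_files:
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)  # 128-d vectors
    if encodings:  # keep only photos in which a face was detected
        reference_encodings.append((path, encodings[0]))

# Encode the visitor's face from a camera snapshot (hypothetical file).
visitor = face_recognition.load_image_file("visitor_snapshot.jpg")
visitor_encoding = face_recognition.face_encodings(visitor)[0]

# Smaller distance = more similar face; the "level of confidence" is
# essentially how close the best match is.
distances = face_recognition.face_distance(
    [enc for _, enc in reference_encodings], visitor_encoding
)
best_distance, best_path = min(
    zip(distances, (p for p, _ in reference_encodings))
)
print(f"Closest match: {best_path} (distance {best_distance:.3f})")
```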
I'm approaching the end. This is Lena Farber, also a Uruguayan artist, living in LA. She is interested in the lost places of old mappae mundi: there were many islands and places people believed existed, which stayed on maps for centuries, until with modern technology we realized these places never existed. So she does the opposite — she trained an algorithm on Google Maps data and put these lost places, the never-existing places, back into real maps, making these kinds of islands.

And this is mostly the end: an exhibition I'm doing right now in Brazil. It was months of hard work making what I call a biased bestiary — I don't know if you know what a bestiario is; it's medieval, yes. What was interesting for me is that in the Middle Ages images were a tool of indoctrination: people didn't read, so you learned through images, and ordinary people learned mostly through the Catholic church, in the Western world. So my question was: how can we "indoctrinate" about all this stuff we are talking about right now — all of us worried about our future, our past, our present? And how can we raise the alarm not by cutting the cable, but by saying, look — through the image, which is still the major organ through which we live, even though we have the other senses. So I made something like a chapel with this bestiary. I spent hours and hours hacking at what the AI could do, because — as all of you who prompt probably know — it's very difficult to get what you want, and very, very difficult if you want something really specific. You need to put in a lot of effort. I generated more than 5,000 images, which you can see in this video on the floor: I made a video where each frame is one of the images, so at the end you have the 5,000 images at 30 images per second. You see only a blur, but you can stop it with this pedal — a music pedal, a MIDI pedal — so you can pause it or set it playing again, and there is one effect in the middle. (A rough sketch of this one-image-per-frame assembly follows at the end of this talk.)

Well, this is more or less the last slide, a kind of contribution, because I was guessing what I would hear here, given the name of the group and the name of the talks. This concept of performativity came first from language studies: how the way we talk performs reality. If I say to you, let's go have a coffee, we are doing something with words — if I don't say those words, that history will not happen. This notion of the performativity of language has been used since the 1960s across all areas of knowledge, and I think that in Latin America especially we have this recent history: our countries are barely 150 or 200 years old, we are still learning to exist as countries, and we still have a lot of histories and myths to find, to understand. In a way, I think most of these artists are dealing with this kind of approach, because of the velocity with which corporations are getting inside real, everyday life there. So that's it. Thank you so much. Sorry for running — it's a kind of strategy: show more, and then people will find and research what they want. Thank you.
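As an aside on the bestiary video just described, here is a rough sketch of the one-image-per-frame assembly — a plausible reconstruction under assumptions, not the artist's actual pipeline: the folder of generated stills, the resolution, and the output name are hypothetical, and OpenCV stands in for whatever tool was really used.

```python
# Pack a folder of generated stills into a 30 fps video, one still per
# frame: 5,000 images then last just under three minutes of playback.
import glob
import cv2

FPS = 30
SIZE = (1920, 1080)  # assumed output resolution (width, height)

frame_paths = sorted(glob.glob("generated/*.png"))  # hypothetical folder
writer = cv2.VideoWriter(
    "bestiary.mp4", cv2.VideoWriter_fourcc(*"mp4v"), FPS, SIZE
)
for path in frame_paths:
    img = cv2.imread(path)
    writer.write(cv2.resize(img, SIZE))  # one generated image = one frame
writer.release()
```

The stop/play interaction from the MIDI pedal could then be handled by a player that listens for pedal messages — for instance with a MIDI library such as mido — toggling playback on each press; one plausible way to wire it up, not necessarily how the installation does it.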
Let me start with a provocative statement: we do not use generative AI in our work. I would never have imagined that this statement could sound somewhat radical, but judging by the reactions of my colleagues and my professional circles, that's exactly how it comes across. What do you mean, you don't use it? Not even for social media posts, for quick research? No.

I'm co-leading a feminist organization called Superrr Lab. We take a power-critical stance on new technologies and bring different, diverse perspectives into tech policy discussions. This is us. Almost no day passes without us being asked if we want to talk about AI, or feminist AI, and we turn down most of these invitations, because we prefer to speak about societal problems rather than quick technological fixes — because that is what it is mostly all about. And yet here I am, starting this talk with the topic of generative AI. Bear with me: I'm going to dive into a deeper problem complex that I find very interesting, and I hope you will too.

So why are people asking me, eyes wide open, why are you not using generative AI? For us it's really quite simple, and I don't want to sound moralistic: ethically, we just cannot stand behind it. It goes against all of our beliefs and everything we stand for. AI, with the power structures and exploitation it embodies, represents pretty much everything we fight against. And let's be honest: at its core, AI is one of the biggest systems of human and environmental exploitation of the 21st century. We all know AI is extremely resource-hungry. The motto of the AI race is higher, faster, further, and all of this unfolds with very little regard for the costs in the present, while being justified with bold promises about the future. No one truly knows where this is headed, not even the companies themselves. They shield their uncertainties behind grand claims about AGI while striving to be at the top of this race, to earn even more money. As Karen Hao writes in her new book Empire of AI — a brilliant book, I highly recommend it — the doctrine of scaling has become so deeply entrenched that some now treat it as if it were a natural law. Scaling compute is not merely seen as one path, but as the path to more advanced AI capabilities, and entire national strategies are being built around this conviction. More compute, more data: scraping the entire web for texts, videos, artworks, violating copyright and privacy rights along the way, and not giving a damn about it.

And for all this scraped data to become useful for training machines, it has to be processed, cleaned, and organized. This brings me to the topic I want to talk about with all of you today: data work — the invisible human labor that fuels our AI systems. I chose this topic for two reasons. First, the exploitation of human labor that we see in AI is symptomatic of these companies and systems. Second, I could give you a ton of abstract explanations of why AI isn't that great, but I believe in storytelling, and the story I want to share with you today is the story of my friend Joan Kinyua. Joan is in her late thirties, just like myself.
She has a little toddler; she's a single mom by choice, as she often emphasizes. She lives in Nairobi, Kenya, and she's a former data worker, and she just spoke about her experiences as a data worker at a conference in Rwanda. I would have loved for Joan to be here and tell you her story in person, but visa processes are tricky. We've tried many times to get data workers from Kenya to, in this case, Germany, and it hasn't worked. Right now Joan is applying for a visa to come to Italy for a labor conference, so please keep your fingers crossed. I'll do my best today to tell Joan's story, and I want to thank her for sharing it with us.

So let me take you back to the year 2017. That year, Joan began a job that would change her life: data labeling. In 2017 hardly anyone knew this role existed, but for Joan it was a dream come true — finally she was working with computers. You see, she had always wanted to study computer science, to be recognized as a woman in tech, but her family thought it was rocket science, and all her friends told her that business was the way to go and that she should study international business. So she did, and she even worked in the field, but she never finished, because being the first-born in an underprivileged African home meant responsibilities first. And deep down she somehow knew that this path wasn't hers anyway. Then her sister got a job at Sama and came home each day full of stories about computers, tasks, and energy.

A quick side note for those of you who don't know Sama. Sama, formerly known as Samasource, is a company based in San Francisco, founded in 2008 as an ethical outsourcing company. Their mission was to provide meaningful, dignified work to people in impoverished countries. They started operations in India and Kenya and gained a reputation as an ethical company in the space. They basically function as a middleman between a tech company — for example OpenAI — and the data workers on the ground. In 2018 — and this is a narrative we know — the company transitioned from a non-profit model to a for-profit model to scale its operations. Might sound familiar: this is also what OpenAI did. And while doing that, their ethical outsourcing began to falter. Mismanagement became commonplace, and Sama accepted a very large contract from Meta for Facebook content moderation for Sub-Saharan Africa, and hundreds of moderators were exposed to extremely graphic material without adequate protection. In 2022, Time magazine exposed the precarious working conditions; what followed were several lawsuits by the moderators, and Sama eventually lost its Meta contract.

But back to Joan's story. Whenever she heard her sister speak about her job, her heart lit up. Joan quit her admin role with no backup plan, only the belief that this was where she belonged. She begged her sister to put in a word, and eventually she got the call. In training, Joan excelled: she scored 100% on every test. Out of the many who applied, she was chosen, and that moment reaffirmed something for her: what is meant for you will always find you. Her job as a data labeler was to teach AI models how to recognize and interpret the world around them.
For self-driving cars, she and her colleagues drew boxes, added tags, and applied colors to everything near and on the road: pedestrians, other vehicles, road signs, even raindrops splashing on the ground. They were covering a radius of about 100 meters beside the road. (A rough sketch of what one such annotation record can look like follows at the end of this passage.) The files are so large that the workers need really fast, powerful GPUs — graphics processing units — and stable broadband connections. And since many of them are freelancers without proper contracts, they are the ones who have to cover those costs. So imagine someone who is unemployed, young, desperate, has just finished university, lives in a two-bedroom house with ten other relatives, and invests their last Kenyan shilling in this computer so that they can do the data work. And if the internet goes down — which happens in Nairobi — they can't work and they can't earn.

Sometimes it takes a full week to annotate one file. These self-driving-car files are very large, and you have to annotate them in a very detailed manner. So it takes you a single week to annotate a file, and if mistakes are made, the company may simply reject the task and withhold the payment: an entire week of work ends up in the trash. A complaint is out of the question, because you don't want to risk the company not giving you any more tasks. Because this is what the system does: it monitors every click, it monitors how efficient you are, how fast you are, and these data points are fed into the system and decide how often and how many tasks you get. And it's super intransparent — you never know how the data points are weighted.

In other projects, Joan labeled household objects for machines such as vacuum cleaners learning to navigate indoor spaces. And I remember very vividly when she told me that she sees photos of people sitting on their toilets, because their vacuum cleaners are taking pictures of them. She is annotating living rooms and bathrooms, marking: is there a plant, is there a pet — teaching these machines. What's also interesting — we had it yesterday in the dual-use session — is that iRobot, the company that developed the Roomba, developed it as a commercial product and then moved into the military space. So now there are Roomba-like robots on the battlefield detecting bombs: the civilian-to-military pipeline, from household commercial product to the battlefield. And all of a sudden, Joan is also annotating for military-purpose devices.

Data labeling is meticulous work, and it's relentless: long hours, day and night, with no social protection, no medical coverage, and no recognition for overtime. That was her daily reality. Joan told me that she sometimes worked 20 hours straight for as little as two dollars an hour. It was as if she was tied to her computer day and night. Sometimes she was just sitting and waiting for tasks, because the system rewarded speed: when you're waiting in front of the machine and you're the first one to click yes to a task, you're rewarded. In the Philippines, companies that do data work advertise the job as the perfect job for stay-at-home moms — you can do it from home, flexible hours — but if you're working 20 hours a day, who's taking care of your kids? They also advertise it as a way to enter a career as a woman in tech, and oftentimes — I'll come to that later — this career is really a dead end.
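To give a feel for what this labeling produces, here is what a single bounding-box annotation might look like, loosely following the widely used COCO-style convention — an illustration with made-up identifiers, not the internal format of Sama or any other platform.

```python
# One bounding-box annotation for one object in one video frame
# (illustrative only; all identifiers are made up).
annotation = {
    "image_id": "drive_0142_frame_000873",  # which frame of which drive
    "category": "pedestrian",               # the tag the worker assigned
    "bbox": [412.0, 260.5, 58.0, 131.0],    # x, y, width, height in pixels
    "occluded": False,
    "annotator_id": "worker_5521",
}

# A single frame can hold dozens of such boxes, and one "file" (a whole
# drive sequence) thousands of frames -- which is why annotating one
# file can take a week, and why a rejection wipes out a week of work.
print(annotation["category"], annotation["bbox"])
```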
From Venezuela and Pakistan we even have media reports about child labor — underage workers training AI. It happens as follows: a family shares a computer and an account, and different family members take shifts labeling data. During the pandemic especially this was a big problem, because everyone was staying at home, you don't have a lot of space, you're tired, and then your kid starts doing your work.

Back to Joan's story. Joan excelled. She started as a tasker, moved on to reviewer, then super-reviewer, and eventually she became a trainer. That kind of progression was only possible because of her sheer consistency and quality of work: she was focused, determined, and striving for excellence. But as she advanced, the cracks in the system became harder to ignore. Early on, she was given increasingly strange and disturbing tasks — questions about cannibalism, instructions to annotate images of dead bodies. For what purpose? She had no idea. And in an industry where everything is marked top secret, there was no one she could turn to.

Data workers and content moderators are usually kept in the dark about the projects and the companies they are working for. They receive tasks through intermediary companies such as Scale AI, Appen, or Sama, and these tasks are often so small that it's really hard to tell what product you are working on. This can lead to situations where workers unknowingly support projects they consider unethical. An example: in 2024, workers were annotating images for a Russian surveillance tool and didn't know it — they only found out after media reported on it. The NDAs, the non-disclosure agreements they have to sign, are so strict that they're not even allowed to tell close family members what they are working on. And this is extremely tough in situations where you have to process material that is psychologically distressing: you are moderating distressing content and you're not even allowed to talk to anyone about it.

This leads to many people developing post-traumatic stress disorder, without psychological support. An organization called Foxglove ran a psychological test with 144 workers in Kenya, and all of them — 100% — had post-traumatic stress disorder. I remember walking the streets of Nairobi with my friend Daniel Motaung, one of the first whistleblowers on these precarious working conditions. Daniel told me that sometimes he just sees planes crashing from the sky — horror visions, haunting images that still follow him from his work as a content moderator, because he never got the adequate psychological support he would have needed. For people with this trauma it is really difficult to maintain normal relationships with their partners and their children, and to keep working, to keep functioning. So many drop out of the labor market, become unemployed, have no psychological support, and really bad things happen to them. And even for those who do not end up traumatized, the NDA is really tricky, because it makes the job a career dead end.
They can never put in a CV what they've actually been working on, the skills they've gained, because they are not allowed to talk about the work they've been doing. So they spend years doing this work and then have a gap in their CV. They were hoping to enter the tech field and climb the career ladder, but what it really is, is a dead end. And in Kenya it's especially sad, because even though we know all of this, the president and the government still keep promoting these jobs as an interesting entry point for young people into the tech sector, without ever ensuring that the working conditions are dignified. So the work is neither dignified nor appreciated. At the re:publica conference, Joan mentioned that while she was annotating all these self-driving-car images, she was hoping that once the cars launched and hit the streets of San Francisco, the Kenyan workers would also get a credit, because they had been working on this all along, for many years. But as you can imagine, nothing happened. They were treated as if they were not there.

Back to Joan's story. Later, as a trainer, Joan encountered something even more troubling. She and others were instructed to take screenshots of their earnings dashboards and alter them, to make it look as if they were earning far more than they really were. Why? To lure more people into the training boot camps — and they were even trained in how to do it. Looking back, Joan says, it becomes clear just how deceptive, how exploitative, and how deeply broken the system truly is.

In 2023, Joan became a research assistant to Stanford fellow Berhan Taye, working on AI harms. Her role was to draft questionnaires and conduct interviews with fellow data workers, and what she heard was devastating: one worker moving from home to home because he had no stable income, another undergoing a cornea operation for eye strain. It was clear that this wasn't just individual hardship — it was systemic.

Then, in December 2024, Joan's own account suddenly got deactivated. A platform she had worked on since 2018, gone overnight, with no explanation and no real appeal process. She had heard countless stories like that, but when it happened to her, something broke. She knew change couldn't wait. To give you the background: in March 2024, Scale AI blocked Kenya from its platform Remotasks, as it had done in Venezuela before. For Scale AI this was just housecleaning, a regular reassessment of whether workers from different countries were really serving its business. Kenya, along with Nigeria and Pakistan, was deemed to have too many workers who were scamming the platform, gaming it to earn money. In a striking irony, these so-called scams, as described by Karen Hao, were simply workers using ChatGPT to boost their productivity. For white-collar workers in the global north, such an approach would be praised by the Silicon Valley narrative, and widespread adoption could even benefit the company. Yet for data workers in the global south, whose labor underpins that very narrative, it was considered punishable and offensive.

Together with Berhan, Joan began building what would become the Data Labelers Association in Kenya, a non-profit fighting for fair labor practices — the first of its kind. They started with nothing: no security, no recognition. But they weren't alone.
They teamed up with a local non-profit called Siasa Place and many other international non-profits. The challenges remain intense: strict NDAs, union busting, retaliation. We've seen much of this before in the cases of content moderators. I told you earlier about Daniel Motaung — this guy — whose story, "Inside Facebook's African Sweatshop," broke in Time magazine. When he spoke out about the bad working conditions, he immediately got fired, because he had also started to organize workers. The moment they started to think about a union, his employer went in and fired him, along with 170 other people who were starting to organize. These people are still fighting in Kenyan courts for fair compensation. And while there were many court cases in Kenya, the companies just moved on: Kenya had too many court cases, so they moved to Nigeria and Ghana and are continuing the exploitation there.

We also see union busting happening in Germany. When a German moderator called Cengiz Haksöz spoke in front of the German Bundestag about the working conditions of content moderation for Instagram and Facebook, his company immediately sent him on leave. This is a way to threaten the other workers, to show: this is what's going to happen to you if you speak out about your working conditions — even though it's in front of the Bundestag, in front of your elected representatives, and it's your right, part of your free speech rights. And very recently — maybe some of you read this in the news — TikTok workers have been striking in Germany. TikTok fired 140 workers from their trust and safety team in Berlin, so basically the whole team, claiming that AI is going to take over, while I and many other people think this is really just a strategy to move to cheaper outsourcing companies. There were strikes going on, and a worker from a different department who joined the strikes was fired. So it was very, very visible that this worker was fired for showing solidarity with their colleagues.

So Joan and her fellow activist colleagues are faced with many challenges, and they've set out a list of five demands: fair pay — workers should earn at least a living wage; dignified working conditions — regular, standardized shifts, paid sick leave, access to psychological counseling; proper contracts — clear terms of engagement with protections against exploitation; professional recognition — opportunities for training and career development; and the right to organize — safe channels to raise concerns and communicate with management, and to unionize without fear of retaliation.

Joan's story is one of the stories of millions of data workers around the globe. We don't know exactly how many there are, because intransparency is part of the system — no one can give you an exact number. Most data work is done in the global south, in poor countries whose governments are hungry for foreign investment from richer countries. Kenya and the Philippines, former English-speaking colonies, have a long history of servicing American companies through call centers and digital work.
And Kenya's colonial legacy left the country without strong institutions to protect its citizens from exploitation, and often struggling with economic crises. Together, these conditions create the perfect opportunity for foreign companies to tap into a desperate labor force willing to do piecework under almost any conditions. The exploitation of data workers is the same exploitation we know from the clothing industry or the coffee industry. It's similar — but the data workers are far more invisible. The companies do everything to market their tools as magical, autonomous, requiring no labor other than that of the Silicon Valley developers who earn six figures. So they do everything to hide this work, and working with outsourcing companies is part of hiding it, because you don't want these workers to be associated with your company.

Why is this fight important? The demands of the workers are not radical — they are a bare minimum: fair pay, proper contracts. And still, making them a reality is a struggle. What is radical is that people are unionizing, that they are coming together, creating an international solidarity movement, exchanging their experiences, building up pressure together, standing up for their rights. They are the ones whose labour is essential, and as a collective they hold enormous power. They know that, and they understand it better every day. And these workers are taking big risks — even in countries with strong protections, companies are pushing back against them. As a civil society organization, we support them. We help them make their demands heard. We organize events where their voices get amplified, where they can join policy discussions, because most of the time they are excluded from those discussions. And we help them build capacity for their work, including fundraising support.

Change is never a single step. It moves through stages: awareness, proposals, policies; then eventually someone implements one of your proposals; then it needs oversight; until finally, maybe, a market or a business model changes. But alongside these pragmatic steps, something more is required: imagination. Because what is a fight against a system if you don't have a vision for an alternative system, if you don't have guiding lights that lead the way? As bell hooks reminds us, what we cannot imagine cannot come into being. So we must dare to ask: what futures do we truly want — for us humans, for other species, for the planet? Futures imagined not only from the starting point of technology, but from the starting point of life itself.

Let me close by turning back to where I started: our decision at Superrr not to use generative AI. In a world where people type every little fart into ChatGPT, our choice may not seem radical — it only seems radical at second glance. And to be honest, it's a privilege, a privilege we have chosen: a luxury to pay people for certain tasks instead of automating them. That said, we don't pass judgment on others who make different choices. Or as my co-founder Elisa puts it: we're well aware that, despite all the dystopian potential, AI systems can help some people cope with the pressure of a capitalist system, and we don't want to pass judgment on that.
After all, appealing to individual responsibility is itself a capitalist strategy, one that allows harmful business models to keep running under the excuse that you don't have to use it. For us, choosing to opt out is not only a first step of resistance; it is something much more important: a first step toward opening up a space for discussions about alternative futures. Futures in which human labor is valued and exploitation is ended. Futures in which creative writing and design are valued. Futures in which companies such as OpenAI no longer command our imagination; futures without monopolies; futures without fascism; futures in which workers shape technologies; futures in which everyone, and the environment, flourishes. And coming to the end: I hope all of you will join us in dreaming up these futures and in making them a reality. The Ars Electronica Festival is full of inspiration, and your talks today were also full of inspiration. I want us to always remember: change is possible, we are powerful, and the future is not prescribed. Or as the sci-fi writer Ursula K. Le Guin put it: we live in capitalism, its power seems inescapable — so did the divine right of kings. Any human power can be resisted and changed by human beings. Resistance and change often begin in art, and very often in our art, the art of words. I thank Joan and her colleagues for sharing their stories with me. This is the website of the Data Labelers Association — please check it out, spread the word about their campaigns, and support them. Thank you so much. Danke.

Thank you very much, both of you. Maybe you both want to take a seat quickly so that we have the chance to reflect together, also with the audience — please feel invited to jump in with questions. I just want to share something that immediately came to my mind, because it is local artists who did this wonderful work that I wanted to share, especially with you, Julia, since you're Austrian but living mainly in Germany. Linda Kronman and Andreas Zingerle, the KairUs collective, did a wonderful project on data workers; "Suspicious Behavior" is the title of the work. Maybe you've even seen it. They researched the workers who had to annotate suspicious behavior — which means that everything becomes suspicious: as soon as somebody is running, for example. Because the annotators don't have the time to really understand what's going on, they have to make these decisions in a very short time; they had to see suspicious behavior even where there was none. The work documents, on the one hand, the consequences that came out of these annotations, and on the other hand the consequences for the annotators when they did not flag something. And then, of course, there is the trauma, the psychologically distressing moments these annotators had, especially when working on video files showing supposedly suspicious people. An extremely interesting project that came to my mind and that I wanted to share. Anything on this topic of data workers that comes to your mind?

I was hoping so — I mean, it's not on data workers directly, but I had a question regarding the word "future."
For whoever was here today: I really liked that people unpacked the terms they use, because "future" tends to become a very theological thing, you know? We had it also with AGI, blah, blah, blah. And with what you presented, it seems that everything you were framing as something for the future is something you're actually doing now, if I understand correctly. So for me, when you threw out this word future, it almost overshadowed all the work you're actually doing at the moment to make the situation better now. And especially in relation to technology, "future" can take a very right turn, so to say. So maybe I would like you to unpack it a little, if you want.

Maybe in regard to data work: what we see is that, specifically for content moderation on social media platforms, the companies are always claiming, yes, yes, there's this work we have to do now, but in the future our technology will be much better and we won't need these content moderators. And the reason they say that is to keep us from improving the working conditions in the here and now, right? So you always have to remind them: this is the situation here and now, and these people deserve better working conditions — and whether these systems can do this job in 20 or 30 years, I don't care right now. It's a distraction mechanism the companies sometimes use. And these discussions about the future — we see it with all of the AI companies: instead of answering the questions, it's yes, yes, but just wait until this and that is finished. They make really bold promises about the future, keep us somewhat on hold, and argue that we have to invest all these resources now because in the future we will eventually benefit from all of it. It's tricky for the climate right now if we're burning all these fossil fuels and building all these nuclear reactors, but eventually, they say, AI will become so good that it will help us solve the climate crisis, right? We just have to keep pushing through a couple more years, and once we've reached this illumination — because you pointed to the religious part — then things will be better. "Just bear with us" is basically what they are telling us over and over. But I'm curious to hear what you're thinking about this.

The future? Nothing to say. The future — well, always ancestral knowledge, I think. In Africa, many cultures — the Nigerian ones are those I know best — have no idea of future, past, and present as something linear; it is all connected. Because of that, they speak of what they can see. But I tend to agree with you. Language has this power: it helps us understand ourselves, understand each other, go further — it's very important, because it's a medium through life. First you are, then you think, and then... So there is all this that we don't know exactly, and it leaves us thinking about what life is, what the future is, what life actually is. But then there is the actual condition we are living in, and it doesn't make sense, you know, what happens to all these people. Another example is music — how Spotify is making its profit, you know, if you're a musician.
And if you use "future" in this context — because when you were talking, I was thinking about those annotators working with no context: how could they tag that action? They were probably looking through cameras, with no sense of all the richness of the environment in which these things were happening. So "future" could also be — well, this is another thing we do all the time: we separate everything into compartments, but everything happens at the same time in our lives, in the world, all the layers at once. "Future," in this context, could mean thinking about the next moment — my next moment, the others' next moment — and in this context of AI I think we are losing a lot, so efforts like this are very important for thinking about very basic things of life: being someone, being paid. It's not possible that someone works 20 hours for it to make any sense. They need a future, and their future is the next minute, you know.

May I also add an artwork here? Because I actually can't stand the word "future" anymore, just as I can't stand the word "AI" anymore. Have a look at Sputniko's Tech Bros, a work in the theme exhibition, because what she's doing there is deconstructing the stereotypes of how these tech bros define how our future should look. It is wonderfully visualized and tangible in this strange dialogue between tech bros who are actually AI-generated guys, just talking about how our future looks, very much in the Elon Musk style — and it shows so clearly how caught we are in this language about the future, and in what we think about the future, that I was thinking: okay, maybe we have to find a new name, a new term, for this word. Another wonderful work, actually.

Sorry, but — on Julia's talk, my question for Julia: there are only very few companies that actually engage in these practices. So my question to you is, why don't you spell them out? You always call them "companies." In reality it's like five, six, seven companies worldwide. I think it's about spelling out exactly who they are, spelling out exactly their version of the future, contextualizing it with their insane longevity projects, their moving to Mars — they really, honestly don't give a damn about what's happening here. So I think we should make very clear who those people are, and there are only a few. If you listen to Mark Zuckerberg's early talks: he was sweating, he had no eye contact, he couldn't talk in front of people. He's a disturbing individual, similar to Jeff Bezos, similar to Peter Thiel, who is basically a racist, similar to Elon Musk. So I think what we need to do is really spell out the individuals behind this entire universe that we keep contextualizing and theorizing about — while in reality we are basically the ones consuming their technology and making them even richer. And the other thing is the politicians we elect, who don't know anything about what they should regulate, because there is just not enough knowledge about it — that's one. The other is the fear factor. When I listen to all these conversations — and I've been in this space for at least ten years — I think we are just not aggressive enough in spelling it out.
So that people really do understand what's going on. Do you see what I'm saying? That was my question. — You're so careful. — I'm with you on this, and I'm going to spell it out more. Thank you for that feedback.

So, I think what you had to say on data labelers was super interesting and super important, and so was how you approached it. But what I got convinced by is that you focus your argument on generative AI. So we can leave that aside, because we do need people to do content moderation — it doesn't work automated so far — and there needs to be some kind of labeling. I think the worst kind of data work is probably content moderation, because of the sheer psychological damage you can get from looking at this material. So I would like to know your view: you mentioned you don't use generative AI, and you explained how the promise of AI is used as an excuse not to improve working conditions now. But how would you see the future of content moderation, if you had to imagine it differently? I would be very interested in that aspect — I don't know, I think you have much better insight into this topic.

I mean, I really hope that ten years from now we're not still scrolling through these social media platforms that are trying to grab our attention, grabbing our data, selling our data to the highest bidder. I think we need to be bold about what we envision for ourselves and the societies we live in. And of course we need content moderators right here, right now. But when you ask me what I'm hoping for: I hope we're not stuck on TikTok, or on similar platforms, ten years from now. I really hope things move in a direction where monopolies are not dominating this space, and I think we need to believe that this is a possible future, right? Because otherwise we are stuck; otherwise we can give up right now. But I'd be curious to hear what others are thinking about this.

Hi there, and congratulations on all the talks here. I come from the Global South, and in my opinion we in the Global South are creating a different experience, a different point of view, that needs to be taken up in the worldwide discussion — because looking at it the way we do now, we need to propose another vision. It's interesting to come here and see East and West in the discussion: the Western part talks all the time about the problem, and I love listening to the Eastern part of the globe, which has cared from the beginning about the co-creation process and about who owns the creation — the professor told us about this point of view. We don't discuss how many people worked together to create a TV, to create a chair, to create everything. When we present art here, for example at Ars Electronica, the name of one artist pops up, and not the whole value chain involved — and at the end of the day we don't care about that work.
How many people worked together to make this possible — this chair, the TV, all the processes we have here? From my point of view, I'd like to bring to you, Velasquez, something striking: you showed us something that is wide open on the internet, and we don't care, we don't pay attention to it — I don't know why — this problem of the underpaid workers behind the data. But I'd like to ask you both: isn't it part of art to show us another possibility to dream? Isn't it part of art to ask us to create a new kind of platform — for data to be a common good instead of something to make money with? Because if we don't exchange our knowledge here, how can we improve as a society, go further and beyond, and solve the problems of climate change, of trash, of bias? Bias popped into my mind because it is a kind of digital trash that is loaded into our minds all the time, and sometimes we don't care about it. Isn't it part of art to make us decolonize our dreams and open our minds to a different perspective? Because in one way we start to complain, but we don't bring another opportunity to dream at other scales. Once again, I don't know if my question is clear. My question is: we are here at Ars Electronica with many people around, and I'd love to see another opportunity, as the Global South, to bring ancestral technology into the discussion. Because I realized the problem is the same — it doesn't matter if you use AI. In South America we went through this with mechanical engineering: when we became carmakers, we had the same problem with labor. And we can't fix it by moving from this technology to another one, because the colonization is the same, the same process. I'd like to hear from you, Fernando Velasquez, the point of art, and from you, Julia, the point of how we can work together to decolonize our dreams.

Wow. I think people have been coming here for twenty-something years — nearly forty — to this and other places in the world, to discuss exactly that, so it's very difficult to sum up in one answer. But I think the world of labor as we knew it has been being destroyed since the 1970s by these companies, and they will go after every regulation to strip it away. At this point, very pragmatically: if our governments simply brought back the principle that people must be paid — it's not something weird, you know, pay people what they need, because you are making a lot of money off them. And now I will take a turn and maybe say something unexpected: I use AI, I use it a lot, and I'm pretty sure the solution is not to refuse to use it because of all the damage it's doing — it is damaging me, and all of us — because if the people who want change don't use it, it will grow along other paths that will be worse than the ones we have today. You need to take care of yourself first, I think, because it's impossible to stop this; it's impossible to get out of the stream. But you need to manage it, and it's not easy to notice that most of the time you are not thinking. Remember when you knew the telephone numbers of all your friends, and the roads in your city, in your neighborhood? Who remembers the two roads ahead of your house? Nobody.
So this started, in a way, at the very beginning of the 2000s, when we started to use mobile phones and then GPS. So it's something that will happen. It's not what I want, it's maybe not the future or the today I want, but it's what we have, so we need to deal with it. In all these layers that come together, we are dreaming, we are thinking of a future that we are trying to make together. At the same time, you have all the scales at the same point, and it all goes together: thinking about your next minutes, about your work for school, will I use GPT or not, and is it making me stupid? But we know that in five years GPT will not be stupid, you know? So do I wait five years, or do I try to learn how to deal with it now? These are challenges that we need to approach, and I'm sure that it's not from the outside that we will manage them. So this is my opinion.

I think, and thank you for your statement as well, I think this work can easily be overwhelming, right? Like, okay, now let's sit here and envision what XYZ technology or what a decolonial future of the world looks like. So that's why I would start in a very local context, right? In a neighborhood, for example, where you manage to bring together a diverse crowd of neighbors and talk about a real-life issue, like: how do we envision the neighborhood of the future, maybe five or ten years from now? It doesn't have to be crazy. Then you don't start with this abstract "how do we envision this technology"; you're starting with something you know really well. And maybe you're also starting with a crowd that you haven't really met before, or that you haven't really talked to a lot before, right? I mean, I live in the city of Berlin and I don't know all of the neighbors who live in my house. So I would be interested in starting really locally and then practicing this collective dreaming. Adrienne Maree Brown, who is a doula, an activist, and an excellent writer from Detroit, started with visionary fiction workshops. She would bring her community together and they would write fiction about worlds without borders, without prisons, without violence, utopian stories, right? But they were spelling them out. And I think this spelling it out, this practicing, this discussing things and hearing opinions that you might not agree with, while you're there and you're listening and you're talking about something that you all care about, your neighborhood or something in your vicinity, I think this could be a starting point. I'm not saying this is how we fix the decolonial futures bit, but I think this is where we could start, and this is also where we could bring others along with us on this path, because then it's not this theoretical, academic path.

Jan, again, and then... Okay. I was just triggered by your question and wanted to answer it. In regard to this whole question of content moderation and how it will work in the future: I remember this summer there was a case of a person who published an article in the Wall Street Journal, or something like this, that was generated with AI, and in this article there was a list of book recommendations, but about half of them were non-existing books. This person afterwards was, I think, probably fired, or if not fired, reprimanded publicly, with everybody shaming this person and so on. But the publication itself also took a big hit to its reputation.
And in general, if you publish anything damaging in the Wall Street Journal or anywhere like that, you incur a fine, or there are laws that prevent you from publishing bullshit, no? And I think, to respond to this question, that in general the law needs to change for these platforms, because in the end they are basically publishing platforms on which everybody has a voice, and yet there is no responsibility for what is written on them. If there were a change in how this is regulated and they became like a publishing house, they could not just let everybody publish; or if a person had to go through a process to write something, then they would also be responsible for what they write, and that would probably require less regulation. But in any case, because those companies would be liable for what appears on the platform, they would invest a lot of money in making sure that nothing happens that damages their reputation. So I think in a way it is very much also about, and this is also a little bit in response to you: let's not give too much responsibility to the artists to solve those things. We have people we should go to and demand that they serve our public interest too. I know it's very convoluted and requires a lot of time and all this kind of stuff, but sometimes it works, at least at a local level, as you were saying, just doing this at the neighborhood level. For example, in my hometown of Basel, in our neighborhood, we, and I say we, but I didn't take part in it, the other residents managed to change things: now our street is a 20-kilometers-per-hour street, cars go only one way, there is no more parking, and for almost a month now it has been full of people every day, which it wasn't before. It's not about technology, but it really requires getting involved with things, and with things that matter.

Yeah, so thank you very much for all your contributions to this discussion. Maybe I can close with something that we did with the critical data group, also at the master's level, because there is this wonderful publication that you for sure know, Data Feminism, which has seven principles, and we extended these principles; the friction and the neighboring are actually included there, which is why it came to my mind, and I just wanted to mention it to you. So Catherine D'Ignazio and Lauren F. Klein have these principles: examine power, challenge power, elevate emotion and embodiment, rethink binaries and hierarchies, embrace pluralism, consider context, and make labor visible. And we, from critical data and the master's program, were discussing them and added: questioning common AI imaginaries, embracing conflict and friction, neighboring data and community minding, finding the gaps in AI, and deconstructing homophilic circles, very much influenced also by what we were reading from Wendy Chun. This is where we are trying to work, research, and create artworks, and maybe we will find one or another topic to extend, based also on the thoughts that you shared with us. So thank you very much for being with us. Thank you also, as the audience, for staying so long the whole day. And see you, hopefully soon, at the next Critical Data Research Group meeting. And good luck to the PhD students who will soon leave us. Thank you.