He is a curator based in Scotland, where he works as a senior lecturer in computational arts and technology at Abertay University. His research focuses on artistic and activist experimentation with emerging technologies. Tactical Entanglements, his monograph on AI art, creative agency, and the limits of intellectual property, was recently published. Here is the book. It was just published, I have to say. When was it published? Six weeks ago? It was just published with meson press. Martin's work on digital art in relation to AI and/or blockchain is widely available in books, including Artists Re:Thinking the Blockchain and MoneyLab Reader 2, and in published and forthcoming articles in journals such as Philosophy and Technology, Culture Machine, Leonardo, and Media Theory. Welcome, Martin Zeilinger.

Thanks for the introduction, Manu, and thank you very much for facilitating the invitation and having me here. Yes, I am from Linz, but I left many, many years ago, and that makes it all the more exciting to come back for this context. Yeah, this book just came out. This is not a book presentation, and I'm not really talking very closely about things from the book, but I'm touching on it tangentially. There are a few copies, and if somebody wants one, I'll make you a good price. So, without further ado, I'll jump right in. Early in the afternoon it was so bright in this courtyard that I thought I had better make my slides really high contrast and take out most of the images. Now it actually looks like it would have been okay with more images in, but here we go.

So I'll be talking about artificial intelligence and machine learning most generally, and more specifically about artists who approach these technologies in what I like to call a tactical mode of operation, and who do so in ways that I feel are useful for thinking through problems of bias and agency, specifically in relation to the issue of algorithmic adjudication. These are the two concepts that will tie this discussion together for me today: the first being algorithmic adjudication, by which most generally I mean the outsourcing of decision-making processes to computational agency, and the second being what I will describe a little later on as bias contagion. As you will see, this talk is partly also a bit of a meditation on terminologies and sensibilities of the pandemic. Maybe this is unavoidable; it's thought-provoking and useful. Maybe it's also because this is the first time I'm traveling internationally in quite some time. So I'm speaking from the really strong conviction that in the context of critical data, digital art has some very important roles to play. These roles are complex, multifaceted, and dynamic; they have to do with experimentation, with delineating and pushing limits, and with formulating critical positions outside of the cultural mainstream. So one of the important roles, in my mind, that artistic experimentation with AI has is to resist the illusory idea of a kind of pure innovation, which in my mind always really just seeks to assimilate experimentation into the so-called creative industries. Another role is to interrogate the corporate mainstreaming and commercialization of emerging AI technologies; this is something that's been discussed a little bit already today. And a third role, for me, would be to expose the integration of AI technologies into various kinds of governmental and policy-oriented control regimes.
Ultimately, what this adds up to for me is a fostering of critical literacy of data in digital culture more broadly. In a book that came out about two years ago, Nick Dyer-Witheford and two co-authors formulated the really dark notion that AI possesses a kind of inhuman power, and that with this power, AI may soon end up emancipating capital from humanity, rather than the other way around. In other words, these authors frame AI as a fundamentally capitalist technology. Without doubt, that is an important position and an important perspective. But what I'm additionally interested in is exploring what I would call tactical uses of AI that actually work against commodification, corporate blackboxing, and propertization. So while Dyer-Witheford highlights the inhuman quality of AI, the work of AI artists very often shows precisely the human dimensions of artificial intelligence: for example, in exposing biased data-labeling labor, or the human agency that remains involved in designing all kinds of algorithmic processes. So my point is not at all that AI art should work with anthropocentric perspectives, or that it should anthropomorphize AI. Instead, my point is that it is only by holding on to human agency in AI contexts that we can also enter into new and very interesting kinds of post-humanist agential entanglements with AI.

In the context of art making and digital culture, questions about agency and about the human qualities of AI play out, for me, maybe most interestingly in relation to issues of creative expression and authorship. This is really also very much what my research is about. So, what does it really mean, in the context of AI-driven art making, to create something? Who or what is a creative agent in AI contexts? What is originality in the context of AI-driven art? And by extension, what does it mean, in the context of AI-based art making, to be an author or an artist? And from this question, we're really just a very small step away from also asking what cultural ownership can really mean in the context of AI. So what is intellectual property in an AI context? What does it mean to own something in the age of AI?

Let me go on to connect this introduction to some of the problematic implications of algorithmic adjudication. I have already defined a little bit what I mean by algorithmic adjudication: most generally, the outsourcing of decision-making processes to computational agency. A starting point for thinking about this a lot more would be the really excellent work of Antoinette Rouvroy on algorithmic governmentality, or algorithmic governance. So in digital art and AI contexts, a focus on algorithmic adjudication can, for example, lead us to the following problem. If there are concerns about who authored an artwork, or about whether an artwork is original, or about whether the copying and recirculation of an artwork is legitimate, can these questions be adjudicated algorithmically? Can these questions be answered by an AI system? Now let me give you an example to illustrate what I mean here.
The example is a tool called Content ID, which many of you have probably had exposure to in some form. Content ID is an AI-driven digital rights management tool that is used to enforce the copyright policy of the YouTube online video platform. Essentially, what it does is compare uploads to entries in a really gigantic database of copyright-protected content that Google has put together, and then make deliberations about whether or not the upload infringes the copyright of any of the materials contained in this database. So Content ID is designed on the belief that an artwork must be characterized by uniqueness and singular authorship. Everything that looks or sounds or behaves like something else in the database cannot be an original artwork, and, within the very limited functionality of Content ID, it can therefore also not be a legitimate artwork. So the system is designed to flag and remove presumptively infringing media uploads by enforcing the assumption that creative agency is incontrovertibly linked to ownership by way of singular authorship. And by enforcing this view algorithmically, Content ID helps to integrate all uploads into the property-oriented media ecology of YouTube. The tool enforces a very restrictive and narrow perspective on what expressive agency is and can be, and on what ownership consequently is. From the user perspective, this ends up translating into a bias against some of the most interesting, to my mind, characteristics of creativity: the fact that it is hugely relational, dialogic, socially embedded, and not at all about singular artist figures or the unique existence of single artworks. So this is what I would describe as bias contagion: the way in which users of YouTube are inevitably habituated to the very restrictive ideas about cultural ownership and copyright that result from the algorithmic adjudication of the Content ID tool. Bias contagion really refers to the way in which bias can spread through networks and communities by way of algorithmic processes.

But in theory, copyright law actually aspires to accommodate this complexity of creativity, its relational and dialogic and socially embedded nature. For example, almost everywhere in the world, the concepts of fair use and fair dealing form exceptions to copyright restrictions, and they are designed to allow creative practices of copying and sharing and reusing for all kinds of purposes, such as criticism or parody. In other words, in theory, the law tries to accommodate the complexity of creativity by acknowledging that our understanding of creative agency actually relies on all kinds of shared cultural norms. But algorithmic adjudication relies not on these socially shared norms but on executable binary rule systems. In the law, copyright exceptions that permit copying and reusing exist in the form of very complex norms and standards that are flexible, context-sensitive, and purposefully lacking in clarity, so that human critical thinking and intervention is required every time to make an informed decision about whether something being copied from an original serves a legitimate purpose. Unfortunately, an algorithmic adjudication tool like Content ID inevitably flattens and simplifies these complex standards and norms into computable rules. Otherwise the system just doesn't work at all.
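(To make that flattening concrete, here is a minimal, hypothetical sketch of fingerprint-style matching of the kind a Content ID-like system performs at vastly larger scale. Google's actual system is proprietary; the average-hash fingerprint, the threshold, and all names below are my own illustrative assumptions, not its real implementation.)

```python
# Illustrative sketch only: not Google's Content ID, just the general shape
# of fingerprint matching against a reference database.
from PIL import Image


def average_hash(path: str, size: int = 8) -> int:
    """Reduce an image to a 64-bit perceptual fingerprint."""
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")


def adjudicate(upload: str, reference_db: dict, threshold: int = 10) -> list:
    """Flag the upload if it is 'near' any reference fingerprint.

    reference_db maps work titles to precomputed fingerprints. Note the
    flattening: a context-sensitive legal standard (fair use, parody,
    quotation) is collapsed into a single integer threshold.
    """
    fp = average_hash(upload)
    return [title for title, ref in reference_db.items() if hamming(fp, ref) <= threshold]
```

The last function is the whole problem in miniature: everything fair dealing would have to weigh (purpose, context, critique) has no representation in the rule; only distance to the database does.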
So in my view, what this indicates is that in the context of algorithmic adjudication we're witnessing a shift away from enacting new kinds of non-human agency in AI, which, of course, is very often the promise of creative AI or of AI art, and instead a move towards enforcing human non-agency through AI, through tools such as Content ID.

Okay, after this long preliminary discussion, let me contextualize this with artistic experimentation with AI. In the example of Content ID, the system spreads and reinforces a kind of self-perpetuating, self-amplifying bias that favors very old-fashioned, outdated notions of authorship and cultural ownership. But artistic experiments with AI can, I believe, actually mobilize this process of bias contagion differently, for critical purposes, precisely in a mode that I would describe as tactical. I derive this concept a little bit from Rita Raley's notion of tactical media, which doesn't really seem to have a lot of purchase in media art contexts anymore, but much more so from Michel de Certeau and his theorization of the distinction between strategic and tactical modes of operation. According to de Certeau, strategy serves administrative purposes, managerial purposes, and very often capitalist agendas, and it does so by drawing on system-inherent control architectures, very often with the purpose of shutting down critical or divergent elements that might exist within a system. By contrast, tactical modes of practice represent a kind of much more open-ended resistance that oftentimes emerges within a system. In this sense, tactical AI is likely to resist strategic approaches that, in AI contexts, would tend to black-box knowledge, close off access, or reinforce very narrow conceptualizations of agency. So what I'd be interested in thinking about further is whether bias contagion can manifest as more than merely a problem in the strategic deployment of AI. In other words, how can we think of bias contagion as a tactical maneuver?

So I want to say a few words about a very interesting AI art installation that I also talk about a lot in the book, by the Canadian artist Adam Basanta. The project is called All We'd Ever Need Is One Another, and what it does, in my mind, is really beautifully expose the absurdity of rule-driven copyright enforcement, of the kind of algorithmic adjudication that Content ID, for example, executes. The installation is essentially a setup of tabletop flatbed scanners arranged in such a way that they scan very abstract light patterns reflecting off of their surfaces. It's a fully randomized system; the artist really has no agency in controlling what the scans look like. So it's a very carefully designed system that shields itself from the purposeful artistic intervention of the artist himself, very much for the purpose of being able to claim it as a fully autonomous kind of art factory. And that's also how the artist describes it: as an art factory. Now, that's the first part. The very important second part is that there's an AI-based tool built into this that compares the scans the system generates by itself to images of existing artworks that it can find online. And when it finds a match, it follows the convention from appropriation art of naming the scan after the original artwork that it matches.
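(How might such a match be scored? Basanta's actual metric is not documented in this talk, so the following is only a hedged sketch of one way a system could produce a machine-legible percentage match between a scan and a found image; the histogram features and all function names are my own illustrative assumptions.)

```python
# Illustrative sketch only: a similarity score that is meaningful to the
# machine but need not correspond to human-perceived resemblance.
from PIL import Image


def histogram(path: str, bins: int = 64) -> list:
    """Coarse grayscale histogram of an image, as a normalized feature vector."""
    img = Image.open(path).convert("L").resize((256, 256))
    counts = [0] * bins
    for p in img.getdata():
        counts[p * bins // 256] += 1
    total = sum(counts)
    return [c / total for c in counts]


def match_percentage(scan: str, candidate: str) -> float:
    """Cosine similarity between the two histograms, reported as a percentage."""
    a, b = histogram(scan), histogram(candidate)
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return round(100 * dot / norm, 2)

# Two images with similar overall tonal distributions can score very high
# here even when a human would see no resemblance between them at all.
```

A figure like "85.81% match" can fall out of a computation of this kind even where no human would see a likeness, which is exactly the machine-legibility point that follows.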
Importantly, the parameters for what a match is, or how a match is determined, are entirely designed with a mind for machine legibility, not human readability. So, as you'll see in a second, a match that the system identifies isn't necessarily human-readable for us. The original and the scan might not look like each other from a human perspective, but the AI system believes that they do. Now, the reason I'm using this example in this context of algorithmic adjudication is, of course, that one artist whose work was presumptively copied by the system ended up suing Adam Basanta for copyright infringement, for a quite substantial sum of money, almost 150,000 Canadian dollars. This is still pending in court in Quebec; it's an ongoing legal struggle. So here on the left side is the scan generated by this art factory, and on the right-hand side a heavily manipulated photograph called Your World Without Paper, created by an artist named Amel Chamandy in 2009. In the opinion of this art factory, the scan is an 85.81% match with this original, and from that you get the title of this work. I don't know if you can read it on the slide, but the title makes reference to the relationship in which these two images presumptively exist. And it's on this basis that Amel Chamandy sued Adam Basanta.

There's a lot more to say about this copyright complaint, but I will maybe skip that. What I want to focus on instead is a discussion of the tactical element of the work, because, of course, the system is designed to provoke these seeming matches and throw them out into the world. So what the project does very beautifully is expose the absurdity of the conservative idea about authorship, and also cultural ownership, that manifests strategically through tools such as Content ID. The algorithmic adjudication that Content ID executes works strategically to produce allegations that cannot easily be contested. Many of you might have found this when you've tried to upload something to YouTube: before it ever even goes online, it disappears and is flagged, probably for a very bad reason, like some incidental music playing in the background on a radio, or many other reasons, in ways that might actually violate your fair use or fair dealing right to upload and publish this creative expression, which the system just cannot understand and cannot accommodate. So Content ID works strategically: it produces allegations that are very difficult to contest. A project like All We'd Ever Need Is One Another, on the contrary, works tactically to provoke infringement allegations that are not easily justified. This is still ongoing in court, and it's going to be a hell of a hard time for Amel Chamandy to prove that this is indeed a copyright infringement; it probably seems intuitively absurd to you, I hope, as it does to me. So what Basanta does is undermine conventional approaches to copyright by injecting a very complex notion of relational, dialogic, embedded authorship and creativity into his AI system. And what this really brings about is that the way he uses AI actually provides him with a number of quite reasonable defenses against these copyright infringement allegations.
And one of them could be for the artist to say: I didn't do this. The algorithmic processes by which these works are generated are sandboxed so carefully, and shielded from me so well, that I have absolutely no creative agency in generating the image, or in finding the match, or in naming the piece and putting it online. This is all happening more or less autonomously. So I think the artist could quite reasonably argue, a little tongue-in-cheek, but nevertheless, that it just does not make any sense to describe him as the creator of this piece, and therefore it also makes very little sense to accuse him of infringing the copyright of another artist. What this points to, in my mind, is that Adam Basanta in this project plays with the notion of a very complex assemblage of expressive agencies. He might be one of them, but the scraper tool is another, and the array of flatbed scanners is another. So it really becomes an interesting post-humanist entanglement of expressive agencies, of which he is a part, but which certainly could not be accommodated by the very straightforward notions of authorship and copyright ownership that the law, for the most part, works with. So this raises the interesting philosophical questions that I've already noted earlier, about what it means to be a creator, or to create something, or what makes a work of art in an AI context. But it also raises very practical questions. These have to do, for example, with the legitimacy of data scraping, and with the huge gray area concerning corporate data mining for machine learning or AI training purposes, which intellectual property law internationally currently has no idea what to do with: whether it's permissible or not permissible, and so on. All of these are issues of computational agency in relation to notions of authorship that are maybe no longer functional, or no longer very meaningful, in this newly emerging context of presumptively creative AI. So overall, I think a good way of summing up the project would be to say that it contaminates the relatively rigid system of IP law with a very dynamic and very fluid idea of creative agency. And the result, again, is a type of bias contagion that can help expose very serious issues with algorithmic adjudication.

In this discussion putting Adam Basanta's piece and the Content ID tool side by side, I focused specifically on authorship questions and copyright questions, but I think it's also possible to trace this idea of bias contagion into very different contexts. There are many relevant examples of AI art projects and practices where artists proceed similarly, by injecting data into existing AI or machine learning ecologies, tactically infecting these ecologies of commercialized, normalized, biased, proprietary data with critical difference, always with at least an implicit focus on facilitating critical literacy of data and of the underlying AI and machine learning technologies. One of these examples, and I'll put the slide up now, and Maja is also here, is this, to me, very fascinating, wonderful, interesting art project called !brute_force. I highly recommend that you maybe talk to Maja about it directly rather than hear me try to describe it, and also go see the newest iteration of the project that's exhibited at the JKU campus.
But in essence, the way I bring it into play here, let me just tell you about one detail of the project. One aspect of it is that there is a collection of biometric data, or vital signs and signals, of a dog, which are then uploaded and injected into the largely black-boxed and proprietary data ecology of the Apple Health platform. This, of course, is a collection of biometric user data that partly works towards offering AI-driven recommendations for healthy behaviors, workout routines, and so forth. But this uploading of alien or foreign data, this contagion of the environment with this data, allows for very intriguing reflection, first of all on data ownership and algorithmic adjudication, but then also for speculation concerning the potentially unforeseeable effects that this contamination of the Apple Health ecology might have. What kind of bias contamination can the dog data trigger when it infects the Apple Health platform? What strategic biases are being exposed in that process? And what tactical biases might be injected into the existing data environment?

For a second example, and I think this is my last slide, I could point to Jake Elwes and an ongoing project called Zizi, which Rosemary already mentioned earlier on. Jake, I think, is actually at Ars Electronica as well, and there was some kind of online event related to this project earlier this afternoon. The work is, in a way, about queering AI. It enacts a very deliberate tactical queering of dataset politics, again problematizing the normative aspects of established perspectives on agency. This project in particular revolves around the AI-based generation of gender-fluid, androgynous drag performers. And this is done partly by, again, injecting alien data into existing datasets: injecting difference, in this case, into biased, normative data ecologies. Specifically, Jake included a thousand images of drag performers in a large mainstream dataset of portraits that is often used for machine learning purposes (a minimal sketch of this injection tactic follows below). And the resulting outputs, and this is an example of one, very powerfully expose the biased assumptions concerning gender identity that tend to be encoded in the mainstream training datasets commonly used for AI-based image generation. So the Zizi project shows that in strategic modes of practice, dataset bias is very likely to accentuate, perpetuate, and even amplify normative sameness. You know, when we see an AI-generated portrait and it just looks so real, oftentimes that means it looks like a bland version of everybody else. And that's precisely not the purpose, not the goal, here. So whereas in strategic modes of practice sameness is accentuated and amplified, and a lack of diversity is normalized, what happens here is that this is very powerfully exposed. The project shows, very much in line with the aesthetics and politics of drag overall, that a radical difference can be provoked when bias contagion is mobilized tactically.

So to wrap it up: there's no doubt that one crucially important ambition within contemporary digital art practice and discourse is to highlight, expose, and if possible eradicate systematic or systemic biases in AI and machine learning.
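(As a concrete illustration of the dataset-injection tactic referenced above: this is not Elwes's actual pipeline, which involved training a generative model on the augmented portrait set; the paths, counts, and function names below are hypothetical.)

```python
# Illustrative sketch only: building a training folder that is a mainstream
# portrait set plus a deliberate injection of out-of-distribution images.
import random
import shutil
from pathlib import Path


def inject(mainstream_dir: str, injected_dir: str, out_dir: str, n_injected: int = 1000) -> None:
    """Copy the mainstream set and up to n_injected alien images into out_dir."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for src in Path(mainstream_dir).glob("*.jpg"):
        shutil.copy(src, out / src.name)
    pool = list(Path(injected_dir).glob("*.jpg"))
    for src in random.sample(pool, min(n_injected, len(pool))):
        shutil.copy(src, out / ("injected_" + src.name))
    # Any model trained on out_dir now inherits the injected distribution:
    # the "bias" of the added images propagates into everything it generates.
```

The tactical point sits in the final comment: whatever is injected becomes part of what the model treats as normal.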
But the train of thought that I've been following here is that perhaps another important ambition should also be to mobilize the concept of bias contagion critically: to focus not only on ejecting bias from AI systems, but simultaneously also on injecting critical counterpoints to the kinds of biases that we can see at work everywhere around us. And so, in this sense, the question to end on is: how can this work at scale, in a bigger sense? And maybe: where can we see this phenomenon of critically deployed bias contagion already at play? Thanks.

Thank you very much, Martin. We now have the chance to reply to you, and I would like to bring all the speakers from today to the stage again. We have some chairs prepared here on the side. So please, Ian; please, Kleantis. I know that you immediately have something to reply. Who else was here? Rosemary, please. But also the audience. And Maja. Maja, do you want to add something? I mean, your work was here on display.

Well, maybe later, because unfortunately I was not able to attend from the beginning. But I would gladly add something when the debate opens up more.

Anybody else? Yeah, Michelle is missing. Too many men sitting here now. Absolutely. Actually, it's also me here. In the program, you're supposed to be in the round table as well. Exactly. I will stand here; I don't need a chair. I can sit here. Oh, thank you so much. Thank you so much, Giacomo. Yeah, I had a short talk in between with Kleantis. And Kleantis, maybe I give you the microphone first, if this is okay. You are working with data, and especially the projects that you presented show this double face of working with data: how to control it, which kinds of data go into a project like the Nicosia project, and so on, and how to actually understand it. So what is actually for the good, and where are we in this kind of situation, where we're actually under this virus situation, and how do we control it? What would be your reply to this?

That's a very difficult question. I mean, it's not black and white. I don't think anybody here could argue against the value of machine learning and data, and how these can help and contribute to society in general. But on the other hand, nobody could argue against the fact that if it's not somehow, not controlled or regulated, but understood properly, it might get out of hand. And I'm stuck now on the last example that Martin gave. Before, when you were talking, I was thinking about Content ID, but now the only thing that is stuck in my head is how biased these algorithms can be. And that's a very nice example, and it's clear. But if you go into the details of machine learning algorithms, and how you can manipulate them or how they are evolved, bias is there pretty much from the beginning. It's in how the weights are set: I could give emphasis to a specific feature because that is in my interest. So by infiltrating and, let's say, polluting the data with something, it was definite that this would happen. It's a clear thing.
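(A minimal sketch of that point about weights: the feature names and numbers are invented purely for illustration, not taken from any project discussed here.)

```python
# Illustrative sketch only: designer-chosen feature weights already encode
# a bias before any training data arrives.
def score(features: dict, weights: dict) -> float:
    """Weighted sum of features; the designer's emphasis is the bias."""
    return sum(weights.get(name, 0.0) * value for name, value in features.items())


candidate = {"engagement": 0.4, "novelty": 0.9, "similarity_to_catalog": 0.2}

# Two weightings, two verdicts, same data:
promotes_sameness = {"similarity_to_catalog": 5.0, "engagement": 1.0}
promotes_difference = {"novelty": 5.0, "engagement": 1.0}

print(score(candidate, promotes_sameness))    # 1.4: rewards resembling the catalog
print(score(candidate, promotes_difference))  # 4.9: rewards divergence
```

Same input, two weightings, two different verdicts: the bias precedes the data.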
However, if we move towards a little more transparency in the algorithms, and not working with black boxes, and if we are actually aware of how everything works, then maybe, and I think you concluded on that, if I understood correctly, these biases can be somehow guided towards a better and more evolved machine learning and artificial intelligence world. I don't know if it's clear what I'm trying to communicate here, and I'm finding it difficult to express, but we could use the same technology in evolving that technology. A deeper understanding of how these technologies work, and transparency in that, are the key to monitoring it, or regulating it, or even guiding it towards becoming a better machine, if that makes sense.

If I can maybe respond right away: those are a number of different responses that you're bringing together. Regulation, monitoring, transparency: they're all very, very different approaches, aren't they? They're not very similar to one another. I mean, they're all important. I don't know if I would favor one over another, or say there needs to be a mix of them. The provocation I tried to formulate was to maybe run with the bias, if it's unavoidable. I was recently talking about a similar topic at a critical legal studies conference, actually, and they did an experiment there where they used all of the papers presented at the conference to create an AI bot that was maybe knowledgeable about critical legal theory, based on all the material that was fed into it. And of course it's not, at all. It's absolute gibberish that comes out, in which you can recognize maybe some fleeting moments of quotes or concepts from different theorists, but it's not knowledgeable in any form or way. But it really helps to expose that these biases are absolutely inevitable, and that the outcome is an amalgamation of the input. And that's not something that AI art very often exposes about itself, or not often enough. The interesting kind of AI art does so, but oftentimes that isn't really the case. I mean, the example, Rosemary, that you gave of the portrait of Edmond de Belamy, and the claim that it was created by an algorithm and not by an artist: that's the absolute opposite, right? That's the claim that there is no human agency or bias involved, and it's completely absurd to try to claim that.

Maybe, I mean, when media artists and urban researchers sit on a panel, we have to decide on what common ground we talk. So whenever I hear concepts from media artists, or from other epistemic communities with whom I have very little touch, I translate them immediately into my own brain circuit. Right? So it'll be very difficult; I can't talk about art with you, for sure. But when I hear "agency," when you say the loss of agency of an artist, I would think of the loss of agency of a citizen of a city. So that's the only way that I can talk, but maybe that's our common ground, right? And so, if I think of it: data is the gold of the 21st century, we know that. AI is the entity which is playing with data and doing something with data, transforming it into services or making business models or whatever. But the essential thing that AI does, and now I speak from the perspective of an urbanist, is taking over decision-making processes.
On the personal level: if I take my phone and I want to choose a restaurant, if I think, should I go to this place or that, I look it up and I see, okay, this is the frequency of visitors; I know there are fewer people here, so maybe I choose that, on Google Maps or whatever. So on a very small scale, almost unnoticeably, our decision-making process is already influenced by algorithms. But now the next scale is the city brain. I don't know if you know this project in Hangzhou in China, because China is ahead of all countries in all this, I mean, we know that. Hangzhou created a brain called the City Brain, which was built by Alibaba. And what they are doing, and they're doing it now, I don't have the most recent information, but until two years ago I was well informed about what's going on, is essentially taking the entire data of the eight-million-person city of Hangzhou and feeding it into the black box of an AI, which feeds its services back into the city. So they have the entire... And in China, data on individuals is more accessible. So the entire decision-making process of an entire city is driven by AI. This is what we are moving into. So that's the translation that I see of the loss of agency of an artist using AI. And I leave it to the discussion: if that's the world that we are moving into, what is our role? What is the agency of citizens, or of institutions, or organisations, or educational institutions?

Well, from my point of view, which is the point of view of the artist, my point of view is that of disrupting all the mechanisms of capture of data, because, as you said, it's a black box that we don't have access to. We don't know the companies which are behind this capture of data. As human beings, we trust in human beings, and we also trust in technology so much, but we don't know what's going on behind it, no? So at the moment, from my citizen point of view, the point of art is disrupting these mechanisms; that is its strategy and its tactic, as you like to put it, and at least more fun to work with.

Yeah, I think something about everything we've discussed here today is that it continually reminds us that we're always standing on shifting sand. We're never in a fixed moment of time where everything is static and everything stops; there are constantly moving parts. And something that I find quite a lot in related discussions, and also when discussing my own work as an artist or as a researcher dealing with these things, is that I continually encounter this desire for artists to kind of solve it, or for art to give solutions or answers, or to break it in a meaningful way that will show us a way forward, or something like that. And I guess I end up at the same point every time: maybe that's not what art needs to do, or should do, or can do. Something that came to mind during your presentation was something that Hito Steyerl has mentioned in relation to engaging with machine learning systems: this desire to pollute them.
And I think it's kind of interesting, Jake Elwes's tactic of, in a way, messing with a system by giving it useless data. Or not useless data, but data that is not intended to be in that system, or something that will offset the bias. But I think it continually creeps in, and the arguments against that that I've heard are: okay, the data set is unrepresentative, so we'll make it representative, and now we can better commodify you, and we can better track you, and we can better capture your value. So I feel like your approach is actually really nice for thinking about this in terms of tactics: that these are things that shouldn't necessarily be fixed, and that we can develop methodologies that are intended to change, and intended to work with the flexibility and unpredictability of the context that we're dealing with. So I just really appreciated that very much. And I feel like it offers, at least for me, a nice, graspable way of thinking about moving forward and working with AI systems.

What you were just saying: you identified this, how did you put it, desire for artists to solve these particular kinds of problems or find solutions. And what you were saying about this massive-scale computation of the data, whatever that means, of 8 million people is at a completely different scale. I wish Michelle was here, because the last provocation she had in her video was to make something really small out of something really big. And she might be the perfect person to tell us some tactics for how that might work. Because that's really what this ends up being about, I guess, right? The problems that we can identify in how AI is instrumentalized in corporate contexts are very often about massive-scale operations. But the most incisive and oftentimes radical interventions that we can see in AI art projects are oftentimes on a very different scale. And they're probably not designed to scale up, and maybe it's also an irrelevant desire to figure out how to scale them up. But they can nevertheless be, yeah, like you said, a dynamic, fluid, not rigid, not fixed set of ideas about how to respond to the terrifying thing you describe, a city brain, which we have to prevent from happening.

I mean, not speaking as an urbanist, and maybe you have a different perspective as an urbanist, but one thing is, if we are terrified of something out of privacy concerns, these are all very luxurious fears. The real problem is climate change, or the loss of biodiversity. In the next 20 years, we're going to lose one million species. This is the report, and Australia says it's much worse; I mean, there was a report in Australia. So we are facing enormous challenges in the next 20 years. And I think AI, Hangzhou, is not going to solve the problems of the planet. It's not responding to climate change. And I personally think, very, very personally, we can discuss this, that power will lie with those people who can solve problems in the future. If it's AI that can solve problems, then great. If it's not solving our problems, then it'll become irrelevant. And that's why I think the tactical approach is extremely important. Because innovation doesn't happen in the center. The big marketized world is there, going its way, like a huge ship with no captain on it, just moving in a direction.
And then we have these little boats around, getting out of the big ship, maybe, and looking for new shores, looking for new ideas, looking for new orientation. We have new maps, we have new visions. Maybe we go to a new island, we find an island, we set up our new... That's where innovation happens, and it happens with single persons. That's what's most incredible: I've been looking at urban innovation for the last 20 years, and all the big innovations happen in extremely small cells. And when we talk about hope, I think, because we have this peculiar fear, this dystopic fear about technology, that's not really it. It's: how are we deploying technology? What is the context of deployment of technology? And the fear should be more like: are we going to solve our problems with climate change? Are we going to solve the deep psychological problems in our society? I mean, if you look at, let's say, suicide rates and things like that, all these micro-studies about psychology are incredible. Nobody is talking about that. So we really have issues. And then we can ask: AI, are you solving our problems or not? It's extremely simple. And they are not solving our problems.

Yeah, but at the same time, machine learning is making systems efficient, yet it's also wasting a huge amount of energy, no? This is the point I want to get at, no? It's like: what do we want? Creating this amount of technology creates, as you mentioned, complexity. I always like this John Maeda sentence: simplicity is better. But technology is always creating a layer of complexity. So in the end, we are trying to solve problems that we had already solved without this technology. Disrupting.

Let me critically close this round table, because we are all researchers. And I would actually close with a quotation that I have here in the booklet, from Friedrich Kittler. This is about this life of a second degree, because we are researchers, and what are we doing? Our research is actually based on our time; is our research also of a second degree? But what is Kittler saying? "In computers, the mechanical objectification never transforms back into a lifeworld sense, but at best into contemplation of life of a second degree: scientific visualization, artificial life, and so on. Universally programmable computers are so cut off from human experience that there is a danger that they would program their users as well." So we are in this life of a second degree, and we're researchers, and we're actually in danger of having our research be of a second or even third degree. So what are we actually doing? Do you think your research is research of a second degree?

Researching from the art side is visualizing the problem. It's not solving the problem, but trying to make a piece that acts as a trigger in the brain of the audience, to make this connection with the problem.

So let's come out of this circle; let's try not to stay in this kind of degree discussion. But thank you very much for all your talks today, and for all your thoughts and inspirations that you gave us. Tomorrow we will continue. Actually, it's a good time to explain a little bit the schedule for tomorrow. Tomorrow we have the LASER Talks, and we talk about hybrid futures.
And of course, again in reflection on our lives. We will sit here under the tree, and we will have talks by, one moment, I again have to go to my booklet, Azen, where is she? Dr. Azen Karoshakin, together with her colleague; she will be contributing online. But we will also have talks from Christa Sommerer and Fabrizio, who are actually doing the LASER Talk tomorrow, Diana Eitenschenke, Christiana Kazaku from the LASER and Leonardo Network, Christoph Thun-Hohenstein, the former director of the MAK in Vienna, who is actually leading the Biennale for Change, Monica Gagliano, Martin Pfoser, and very local people: the Leisenhof will also participate, and Gabriele Winkler. So, looking forward to the talks tomorrow, and thank you very much for your contributions today. We are not done: we will hand over the stage, the microphone, and the sound experiences to Kike, Julia, and their team. And yeah, we are open until 10 o'clock in the evening. Again, we have to shut down at 10 o'clock, very punctually, because the Motel One hotel is right here, and we should not disturb them after 10 o'clock. But enjoy. Now let's go to the bar, have another discussion, maybe get totally away from all these discussions that we had.

Oh, I almost overlooked something. Or rather, I have two thoughts on all these discussions. Feeding meaningless data or destructive data might be just what the AI could need to make the picture complete; that was one of my thoughts. Understanding humans better: what are humans doing? Humans are annoying, so AI figures that out too. The second thing was, you said that data is the goal of the 21st century. Gold. Goal or gold? Gold. No? God? Gold. Okay, so then I would be the alchemist trying to make gold out of shit. Anybody who wants to add something? Okay. What the fuck is critical data? Do we need data? How much data do we need for our life in the future? So thank you very much for your contribution. And yeah, looking forward to seeing you tomorrow for our next talks. And handing over to the Sound Campus.