Good morning. It's the third day of Ars Electronica. We kicked off two days ago with the pre-opening walk of the fantastic Ars Electronica Festival, and it's a great privilege to open the 12th edition of Expanded Animation. This edition may be a little confusing, because it's totally new: the event is morphing into its next frame, from a symposium into a conference. It's a three-day event with a lot of presentations, nine panels in three days, 35 presentations in total. The structure is that we have invited guests from 11 o'clock to 1 o'clock, tomorrow with an AI panel, and on the last day the panel Art and Industry. We will kick off the conference format today at 2 p.m. If that's too much information, you can find everything on the website, and here you see an overview of the panels. It's great that we have a lot of partners again this year, from universities, local institutions and international associations. One partner for a couple of years now is ASIFA Austria, an association that we will briefly introduce, and it's great to feature artistic positions in this panel, as our title Expanded Animation suggests. So I'm very interested in the presentations in the next panel, and I will pass over to Reinhold Bidner, a long-term collaborator and supporter of Ars Electronica. I think you have been involved with Ars Electronica for almost two decades, and he has organized the ASIFA panel with us for, I think, the third time. So thank you for the collaboration, and I'm looking forward to your introduction. Thank you. So I will start by telling you a few words about ASIFA.
So yeah, it's the third collaboration, and Jürgen and me, for example, are both ASIFA Austria members. ASIFA is an association, the Association Internationale du Film d'Animation. The international association was founded in 1960 in Annecy, at the huge animation festival there, and the Austrian branch, which is why this forum is called the ASIFA Austria forum, was founded back in 1985, so quite some years ago as well; I think next year is the jubilee, with some festivities. Of course it has a focus on artistic and experimental animated film, and a focus on Austria. Just to name a few things about ASIFA Austria: they have an animation studio in Vienna, in the Pontegasse, where you can work on films, where there are discussion formats and workshops, and where there's an archive. Then there is, for example, the ASIFAKEIL in Vienna, a beautiful place in the MuseumsQuartier to exhibit works somewhere between animation and fine arts. And connected to the ASIFAKEIL: it's also a role of ASIFA to sometimes publish books, and this book, Animating Art, brings together 130 animators who exhibited at the ASIFAKEIL. It's a new English version now, with 130 exhibitions, so let me know if you're interested in the book. Please take a look at the website too; hopefully you can order it there, I didn't ask. Anyway. As a last thing, ASIFA also organizes festivals and conferences, such as the Best Austrian Animation Festival, the Under the Radar festival and conference, and the Animafest Scanner symposium, which takes place in the context of Animafest Zagreb. So I would say, let's switch to...
Let's go on to the first speakers, the Slow Bros.: Onat Hekimoğlu and Ole Tillmann. Onat is a game designer and composer based in Cologne, as is Ole. Onat has been working on interactive projects, installations, VR experiences and games. Together with Ole he's co-founder of the game studio Slow Bros., and he was director and composer of the project they will talk about in a moment. Ole Tillmann is an illustrator, also living in Cologne, working on illustration and animation projects, games, magazines, streaming services, TV, live shows and much more, for clients such as Sony, Netflix, Vice and Disney, among many others. Both are co-founders, Ole is art director, and the presentation is called Making of Harold Halibut: A Handmade Narrative Game. The stage is yours.

It might actually make sense to get both mics, because he's usually super silent. Or I can hold it a little farther away. First of all, hi, thank you for inviting us. Reini, Jürgen, thanks for the nice introductions. My name is Onat, I'm the director and composer of Harold Halibut. Right, and I'm Ole, and I worked as the art director of the game. As a first stop, we're going to show you a small trailer of what we made, so you know what we're talking about.

Darling Arna, when the ship crashed, I half wondered if life as we knew it would collapse in on itself as well. It seemed like the pressure of the ocean surrounding us was pressing through the ship's hull, and I wondered how I, how any of us, would find a new way to keep going on. There aren't many people I can talk to without feeling inferior. Oh, thanks. No, I didn't mean like that. You just, you're not a Jimmy Judger, you know? You seem to just accept. Hmm, I try. It could be worse. But there must be more. More to life. Is that Dr.
Computer? Dr. Computer? This opens a whole box of questions. Actually, I did kind of make a mistake. We all make them, Harold. Ah! Who? What? So... we're sorry to put the burden on you, Harold, but you're the next best man for the job. I... yes! I mean, yes. Thank you.

So, Harold Halibut is a handmade narrative game about friendship and life on a spaceship stuck under the sea. What do we mean by handmade? We actually built everything you see in the game by hand first. That includes the sets, all the environments and backgrounds, as well as the characters and puppets like this one. We then 3D-scan everything to bring it into the game. And right, the 3D scanning process has been simplified here for illustrative purposes. And right, next. Okay, the mic's out. Test. Oh, it's just mine? Is it gonna come back? Is it broken? Should I set this one aside? No. Right, we're going to go right into another video of the making of. Thank you.

Okay, so the focus of this presentation is going to be a summary of how we got here, how the technology behind the game works, why we made a game in the first place, and why it took us so many years. Everything started in 2011, when we set out to make a stop-motion point-and-click adventure set in an underwater world. I had just finished film school and told two friends of mine, Fabian and Daniel: why don't we make a game? Why stop motion, you might ask? Well, we love stop-motion films, but also, none of us three could draw or do 3D stuff. This was a couple of months before Ole joined the team, so this actually seemed like the easier way for us. It's always a matter of perspective. We started building our first models in my kitchen. I even still had hair at that time, so it's been a while. This is Harold's room, by the way; you are going to see it in different iterations throughout this talk. And the next step was to create characters.
So this was the first model that we actually built, and it was also my application for a master's in game design. The next step was to create characters, and we quickly realized that while we were good at building things, we weren't artists, and it was very hard for us to visualize what our main character could look like. We knew we needed some kind of concept art. Which is where I came in. I grew up in the same town as Onat and Fabi and our other friends, and we were connected through our sisters. At the time I had just finished art school and was living in the US, working for Disney in Los Angeles, so I was pretty far away. And I received this video, which I kind of loved off the bat; in the background there's some cute music playing on top of it. They convinced me to just draw up a couple of loose sketches for their main character, Harold. Yeah, this is the sketch we got from Ole, and we thought that with a great sketch like this it should be easy to build a character, and we ended up creating this abomination. So we asked Ole if he didn't want to join the team to do all the artist-related things. Which I happily did, and it ultimately resulted in me moving back to Cologne to join them, and we dove deep into experimenting with this whole process. We were all in our early to mid-20s and basically just poured a lot of our free time into this project. At the time we were still planning to do actual stop motion, which is why the bodies still had rigs inside them. There were foam bodies and latex and all the stuff you have to deal with in stop motion. And they didn't have mouths or eyebrows, as we planned to add those in post-production later on. And this was one of the more elaborate sets we built at the time, but it's still just a static photograph that we lit with actual lamps around the set.
While we didn't have the founder myth of a garage, we did spend all of our time in, this is Onat's bedroom at the time, and it was kind of between that and the kitchen, getting it all messy and full of sawdust. And yeah, we started recording stop-motion footage. Just because people haven't seen these pictures: on the left side you see the lighting set up around the actual built set, and on the right side all the animating of the characters in front of green screens, frame by frame. We then planned to add that as a 2D layer on top of these static backgrounds. And these first things led to this, the first prototype of the game. We took my master's thesis in game design as an opportunity to create that first prototype, and it was also the perfect moment to evaluate what we liked about it and what we didn't. The general stop-motion look or style was a given by that time, and we liked the atmosphere of the photographed sets, but you can already see in this prototype how restricting it is to have separate layers. When you're working on a stop-motion film, everything is inside the same frame: the characters, the backgrounds, all the objects the characters interact with, and so on, so there is this sort of consistency. This, honestly, kind of looked like a cheap Photoshop collage. The character looks like it's slapped onto a static background; it doesn't feel like he's really in this environment. So what we had was a classic point-and-click adventure with a focus on puzzle solving, static lights and camera, and restricted movement. But what we wanted was a modern narrative game with a focus on story and exploration.
I mean, the whole project basically started because we wanted to tell a deep and engaging story, and we had the feeling that classic point-and-click puzzle solving kind of slowed down the pace of the story experience we actually wanted to create. The reason we set out to make a point-and-click adventure in the beginning was that it was 2011, and at that time it seemed like the minimal amount of gameplay one could have without going full-on full-motion video. Dear Esther, which is one of the earlier examples of a very narrative-focused, basically walking-simulator game, came out in 2012, just as a reference. We also wanted a much more cinematic feel in general, with dynamic lights, camera movements and the ability to move around freely. In short, we wanted all the possibilities, benefits and feel of modern 3D games while keeping the handmade physicality of our game assets. So we started experimenting. During this time we were still freelancing individuals and hadn't really considered founding a company or running a business of any kind. We applied for a German video game development fund and miraculously got funding to work full-time on the prototype. The whole process, from applying until we actually got it, took more than a year, because we were so unfamiliar with it all, just students trying to stumble our way through the German bureaucracy, as well as having to found a company and deal with everything that comes with that. But then we could finally focus on finding a way to achieve the goals we had set based on that initial stop-motion prototype. We did a lot of experiments, starting with projecting the photographed backgrounds onto 3D mesh representations of these environments. And while this already gave us some depth, a certain parallax effect and so on, it was still too limiting, as the lighting would basically have been baked into the photos in this case as well.
We then tried normal-mapped sprites for the characters, and a couple of other failed experiments, until we finally tried out 3D scanning. This was a very early test in the engine, and while it was missing a lot of detail compared with the actual models, it gave us a glimpse of where working with 3D scans could lead us. By the way, this was 2013, and resources about 3D scanning in general were not as widely available as they are today. We didn't know of any games that used 3D scanning at that time; one of the earlier examples was The Vanishing of Ethan Carter, which came out in 2014, so a year after that. At the time, 3D scanning was mostly used for geological or architectural scans, for science and all that, and not so much in the creative industry. These experiments were very promising, so we created our final workflow based on them, which is, by the way, the same process we used until we released the game. It starts with photogrammetry; here's a very small intro for those of you who don't know how that works. Basically, we use a regular photo camera and a turntable to take hundreds of photos of our model from different directions. It's important to light the object as flatly as possible during this process, so contrary to what you would do creatively, because ideally we don't want any lighting information in the resulting models; that way we can light everything later in the game engine, which is Unity in our case. The photogrammetry software then analyzes the photos, tries to match common points between all of them, and creates a 3D model out of these. You can see the result here. We used RealityCapture, by the way, as our photogrammetry software. The little white squares you see here are all the different photos we took.
By analyzing the common points, the software can actually calculate, or kind of recreate, the positions from which the photos were taken. Unfortunately the process doesn't end there, as it produces a very high-poly model, somewhere around 10 to 100 million polygons, which isn't suitable for games at all and needs a lot of manual cleanup. The next steps are basically manually recreating the model to get a nice and clean topology, and baking all the detail information from the original model onto the new low-poly model in the form of normal maps. We also baked the base color, the color texture of the models. We then have to manually clean up and adjust the materials in Substance Painter, to end up with a physically correct material that works in any lighting environment. Furthermore, we were really lucky at the time to have access to never-released tech, tech that probably never will be released because the company doesn't exist anymore: friends of ours at the university in Cologne had built a material scanner that could capture all the textures you need for physically based materials in one go. And this is an example of these material scans. What you're seeing here is basically just a plane, a quad; it's the floor of Harold's room again. All the details and depth you see here, the roughness, all the physical properties, are basically just the detailed textures. We could only use the material scanner for flattish objects like walls and floors, though, so everything else, like our character model materials, had to be authored manually.
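The idea that all the visible depth and roughness of that floor lives purely in textures on a flat quad can be sketched in a few lines. This is an illustrative toy, not the team's actual baking pipeline: the `height_to_normal_map` helper and its values are invented here, showing how per-texel surface detail can be derived as a tangent-space normal map from a height map via finite differences.

```python
import numpy as np

def height_to_normal_map(height: np.ndarray, strength: float = 1.0) -> np.ndarray:
    """Convert a 2D height map (H x W floats) into an H x W x 3 array of unit
    normals in tangent space (x right, y up, z pointing out of the surface)."""
    # Central differences approximate the surface slope at each texel.
    dx = np.gradient(height, axis=1) * strength
    dy = np.gradient(height, axis=0) * strength
    # A slope of (dx, dy) tilts the normal by (-dx, -dy) against the slope.
    normals = np.stack([-dx, -dy, np.ones_like(height)], axis=-1)
    normals /= np.linalg.norm(normals, axis=-1, keepdims=True)
    return normals

# A perfectly flat height map yields normals pointing straight out: (0, 0, 1).
flat = height_to_normal_map(np.zeros((4, 4)))
print(np.allclose(flat, [0.0, 0.0, 1.0]))  # True
```

In a real engine the resulting map would be packed into an RGB texture and sampled per pixel at lighting time, which is why the flat, shadow-free capture lighting described above matters: any baked-in shading would fight the engine's own lights.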
And this is another example of us building another kind of floor, just to show the whole process and the amount of detail that went into it, from the experimentation all the way to having it in the game engine, and also to let you appreciate how little of this floor you actually end up seeing in the game. We do feel it contributes greatly to the overall look, though, that everything is very granular and detailed. We also applied the photogrammetry-based process to our character models and, from this point on, to the animation workflow. So meet Chris Tennerbaum, a school teacher on board the ship, who at this point no longer has a stop-motion rig inside his body; these are basically just rigid clay statues. The movable parts are the arms or the heads at best, because we have to dress them; the costumes are still sewn textiles, and you have to dress the puppets as you would an actual human. The resulting models can then be digitally rigged, because at this point they're just regular 3D models, really. And this makes it possible to use motion capture to animate our characters. We tried many different systems over time, but especially the development of sensor-based systems was helpful, as it made motion capture more affordable and usable in small rooms as well. And yeah, I'm actually playing all the characters in the game, for cost reasons and to speed up the process, because it made things really easy that way. We used tripods as stand-ins. And honestly, the beautiful picture of Harold here is only for this video's purposes; we usually didn't even have pictures like that, just bare tripods.
So even in dialogues where multiple characters are acting together, we basically recorded them sequentially, using the audio files, the voiceover, as a timing reference; pretty similar, actually, to how you would work in a classical animation film workflow. But it enabled us to... It's not like I'm one of the ship's foremost experts in molecular sciences. So, what is your procedure idea exactly? So, there are a couple of stages to it. The first stage is all about our current stasis. As we all know, the ship's weight and the fact that certain sections are full of water give us a stable buoyancy, and because of the tide, we're in a very slow and long orbit. So, um, yeah, this was an example of that process, side by side. One of our biggest secret-sauce ingredients besides that is a team made up of people from outside of games. We ended up roping in a pretty interdisciplinary crew of experts from different areas like film and illustration, but also carpentry, architecture and apparel, which we feel contributed greatly to the end result. Of course, by the end we also had people from games on the team. But back to Chris, because we like to talk about our costume designer, Holle, as an example of an added layer of storytelling: usually costume design falls to the character designers in a lot of game teams, but having an additional mind thinking about this layer just ends up benefiting the storytelling. Using the newly developed workflow and all our findings, we created a vertical slice of the game, which you can see here. It's Harold's room again, but this time with our main character nicely integrated into the environment. This was 2016. We had finally reached a stage that gave a good impression of the game we were going to create, and we were finally ready to show it to the public for the first time, so we released our very first trailer in 2016. Here's a little snippet of it.
I disabled the audio so I can talk over it. As you can see, the first vertical slice was already quite close to the current state of the game. Things like lighting improved on the technological side, but the general individual bits and pieces were kind of there. And then, eight years later, we finally released the game. So why did it take so long? Right. The impression people often get is that it's because of the handmade aspect, because we're building everything bit by bit and that takes a lot of time, but that's actually kind of a misconception, because at this level of detail and graphical quality, the time would be pretty comparable in any regular, digital-only 3D process. In the end it's basically a sum of three factors. First, it was our first game, and when we started the journey we didn't know anything about how to make games, so that was one big part of the process; we learned a lot of things on the way, and we also had to develop a workflow for something that didn't exist in this form before. Second, it was a rather ambitious project for our team size; beyond the visuals and quality, it's also quite a long game, with a playtime of 12 to 18 hours and nine hours of fully motion-captured and voiced dialogue. And the most important factor is that we often didn't have funding for longer periods. If we did the same project today with proper funding, it would probably take somewhere around four to five years with the same team, instead of the 13 it ultimately took. Some additional details about the funding: we started out with a government film fund in North Rhine-Westphalia in Germany, which was amazingly helpful, then worked with a publisher for some time, but had to part ways along the way.
And ultimately the Game Pass deal we made with Microsoft towards the end of development enabled us to really realize the project and bring it to the end the way we wanted, which was really important, because a lot of the development was also just influenced by the stress of having to find and deal with funding. This was a great way of not having a lot of external influence, or companies telling us to do something more logical when we were already, you know, nine years into developing this slightly crazy project. Right, and thankfully the reception was ultimately really kind and nice and beautiful, and we wanted to share some of it in the trailer we made for that. Thank you.

So maybe we have to switch. Thanks a lot for this great presentation. There must be questions from the audience already. Yes, Alex, I think you get a mic from the fantastic team. Hello, thanks a lot for your great presentation. It's an awesome piece of art, I would say. Thank you. After all this long time, are you willing to go on with different projects, or is it done now and we never see you again? No, in fact, we are going to do exactly that, especially using the workflow we created, because, as Ole briefly mentioned, over time we obviously perfected it and got more efficient in the way we're doing it, and first of all our entire team works like that, so we couldn't even work in a different way. But even compared with more traditional 3D-based workflows, it's very comparable in terms of the time it takes, while looking totally unique to us. So yeah, it's basically already decided that whatever we do, it's going to be in a similar style. We are likely going to explore other game genres, and we'll probably do something smaller in scale in between, at least. But yeah, we're definitely going to go on with other games as well.
But also other media, so we're not strictly limited to games. Thankfully it was also, socially, a really nice experience; we're all still friends and survived the time. Considering that we also run a business together as friends, and that we had so many people question this project over time, reasonably, of course, I think it's worth mentioning that we made it here and are happy to go on, as a business, as friends, and as a creative endeavor. Back here, sorry. Yeah, thank you for the presentation. At which stage of the development was the story fully developed? Did you write it all yourself, or did you have external writers? And if so, how and where did you find them? So, yeah, good question. We developed the story pretty early on, at least a big outline; it was important to have one big source pool that we figured out and that then controlled everything else that happened from there on out. That was us, the four people who founded the company. We decided fairly early on, though, after we had set that initial outline, that we needed a native speaker, because we knew we wanted the game to be in English. And we then coincidentally found someone at, I can't remember what. EGX. EGX in London. It was the two of you. But yeah, someone just snuck up on us and offered their services: hey, do you need a writer? It was so crazy that that ended up being the perfect match, because we then got to know Danny Waits, that's his name, really well and worked with him, especially on developing the dialogue.
We would go over our plot outline and eventually drill down into the more detailed story structure, and then introduce him to each scene: oh, in this scene these characters are present. He really helped us, both with the dialogue and with honing in on the personalities, really getting the characters one step beyond the point we had brought them to. And how did you find the voice actors? A pretty tedious process, mostly on Twitter, just asking people who wanted to participate and then going through an actual casting process bit by bit. Fabi, another one of our co-founders, is sitting in the audience with us, which is important because I generally want to encourage everyone to come up to us and ask questions afterwards; obviously it's a lot of information that we're trying to boil down into this one presentation. But yeah, it was then just a casting process, having people talk and try out these characters that we had imagined for quite some time by that point; in our heads they were pretty concrete people, so we knew pretty well whether a voice was right or not. Last question from my side, I guess: did you use AI at any point during the development? No. Actually, a little note on the voice acting first: there was one little exception, which is Harold, our main character, because that was, like a lot of things, a very lucky coincidence. When we were working on the 2016 trailer, we needed a voiceover at least for our main character.
A friend of mine knew someone who is an actor, and he spoke a couple of lines as a placeholder, but he then became the real Harold. So yeah, he basically came to us coincidentally. And AI: we didn't use any AI during the process; again, we were quite far into the process already when the AI thing came up. But then again, we actually did some experiments, because we wanted to try things, especially upscaling and so on. But even that often looked too artificial in our eyes. And it was kind of the contrary of what we are doing, which is going through the slow and analog process of things. So we thought it might have potential more on the tool side, like I said: upscaling, for example, cleaning up stuff, maybe denoising photos or so. But by the time we were testing these things, they still weren't as advanced as we had hoped. So, Irina, maybe the last question. It's connected to the voiceover topic. You told us that you were acting all the characters, which is amazing. But had you recorded the voiceovers before the acting? Were you acting to the voiceovers, and if so, how was this experience for you, keeping the timing?
Yeah, it was exactly like that. I mentioned it briefly, but only very briefly: that's obviously essential to the process, it wouldn't have worked otherwise. We had not only recorded the voiceovers but also already edited the dialogues with their final timing, so we had these completed dialogues, just without the visuals. That means we could actually use the voiceovers and the dialogues as a timing template to act out the characters. And it was even more important than that: it's obviously important to react to what is there in the voice, but it was also the perfect timing cue, for example when multiple characters are going through the same corridor simultaneously. We put markers on the floor, but we still had to know: okay, I have to start at marker A and go to marker B, but start walking on that keyword. So in the end, when we put all of those animations together, the characters actually walk one after another and not on top of each other, or things like that. And maybe connected to this: how did you manage the lip sync with these voices? Yeah, that was a completely separate process, actually. We did a lot of experiments there as well, from facial capture to what we ended up with in the end, which is a tool called FaceFX. It analyzes the voice audio and creates, or drives, blend shapes that we created beforehand. So, blend shapes basically as in other 3D animation, but again it was a careful balancing act between wanting to mimic stop motion as much as possible and what fits better in the game world, like the animations themselves. For walking around, we felt it made more sense when it's fluid, because it worked better as a game. For the facial animations, on the other hand, we limited the animation to key poses and skipped the blending between the blend shapes, so that we had a more stop-motion-looking facial animation.
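The "key poses, no blending" idea can be sketched in a few lines. This is a hypothetical toy, not the actual FaceFX setup: `stepped_weight` and its key values are invented here, showing how holding each blend-shape weight until the next key, instead of interpolating between keys, produces the stepped, stop-motion look described above.

```python
def stepped_weight(keys, t):
    """keys: list of (time, weight) key poses sorted by time.
    Return the weight of the latest key at or before t: hold, don't blend."""
    current = keys[0][1]          # before the first key, hold the first pose
    for key_time, weight in keys:
        if key_time <= t:
            current = weight      # snap to this pose and hold it
        else:
            break                 # later keys don't matter yet
    return current

keys = [(0.0, 0.0), (0.5, 1.0), (1.0, 0.2)]
print(stepped_weight(keys, 0.49))  # 0.0 -- still holding the first pose
print(stepped_weight(keys, 0.75))  # 1.0 -- snapped, no in-between values
```

Channels that should stay smooth, like the eye movement mentioned below, would simply keep a regular interpolated curve instead of this stepped lookup.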
But then again, we had smooth eye movement, for example, because that felt more fitting for the depth of the story and so on, and it wasn't as jarring as having those jumps. So yeah, it was always a careful balancing act, and the facial animation was basically driven by the audio. Thank you. So I would also suggest we move on. Thanks a lot for this beautiful project. I'm quite sure there's a dark and cold winter coming, but you'll have Harold Halibut; that's about 10 hours of gameplay or so? 12 to 18. So, yes, you will have some time in winter for that. It's on Steam. And now I would like to welcome Irina Rubina as our second presenter. She's an animation director, producer and animator based in Stuttgart. She has a company called Iraro Films, and she creates on the one hand animated short films and music videos, but also hybrid and collaborative projects between animation film, stage performance, music and dance. She studied at the Filmakademie Baden-Württemberg and also at Gobelins, l'école de l'image, and she holds a PhD in animation directing from the Film University Babelsberg Konrad Wolf. Last but not least, she's also a mentor, lecturer and curator at various institutions, and a board member of ASIFA Germany; maybe you still remember that I tried to explain ASIFA at the beginning. So welcome, Irina. It was still my job to introduce the topic of the talk: Animation and Mathematics, Partners in Crime, or How Curves, Numbers and Formulas Guide My Creative Process. Sorry. Thank you for introducing the topic, and welcome to my talk. Today I will try to show you how mathematics influences my work and my creative process. But before I start, I would like to ask you a question: how was your relationship with mathematics at school? Please raise your hands: who hated mathematics and was glad to quit school and never do anything connected to maths again? Yay! Honest, thank you!
And is there somebody who loves mathematics, and maybe was explaining all the tasks to classmates, or everyone was copying your homework during school? Yeah, okay, that's a good balance. As you can imagine, I belonged to the second group of people. And it influenced my life later: art and mathematics were the two subjects that I selected in high school as my advanced courses. After school, I went to the art academy to study photography, and it turned out that art, or the way it was lived in this school, was too subjective for me. My fellow students and professors were discussing art concepts endlessly, and I needed fewer words and more practical work. So I decided to go away and study mathematics, because at this moment I was searching for objective beauty in mathematics. Like art, mathematics is something that humankind invented from zero. It's an artificial construct. It's a language that doesn't exist on the street. With mathematics we can perhaps describe nature, but we cannot find mathematics in it. And theoretically it's a nice idea: if you have a theorem, its proof can only be either right or wrong. There is nothing in between, no subjective opinion. Maybe there are several proofs of a theorem, but each is either right or wrong. But after doing this for several semesters, I noticed I missed art, and I missed human beings, and I missed social topics. So those two disappointments brought me to animation. And I still keep connecting those two subjects in my work. Here you see optical illusions, an impossible triangle by Penrose and a painting by Escher. And I think film and animation are also illusions. We create characters and stories, and people want to be deceived by those stories. They want to forget that it's fictional. They want to dive into this magic, and it's real for a moment.
So when I started creating abstract animation, I also wanted to use abstract storytelling in my works, to tell stories through micro-geometries, to create illusions that bypass the conscious mind and go directly into the subconscious. And at this point I discovered two pioneers, Oskar Fischinger and Walter Ruttmann, the inventors of absolute film. They were inventing it in the last century, and absolute film for them was a film that works without actors, without a script, without words, without a camera. Such a film should use the basic forms of expression: colors, rhythms, sounds. Based on this research, I decided to make Jazz Orgie, a film that was shown here at Ars Electronica a long time ago. Here I connected a lot of the arts that I loved: dance performance, jazz music, and constructivism. And I want to show it to you. It's only one minute long. Thank you. There were a lot of small illusions happening in this film. I will show you a few details; it's probably hard to grasp when you see it for the first time. I was taking 2D animations and adding three-dimensional space to them. We are jumping into a ball, and suddenly the camera goes through a tunnel. Mixing those strict geometric forms with organic movements was important for me, to create breaks and disruptions, and to use those disruptions to build the dramaturgy, the abstract dramaturgy of this piece. The next project I would like to show you is a music video for Miles Davis's interpretation of Tina Turner's hit What's Love Got To Do With It. Probably a lot of you know Miles Davis, the famous composer and musician, but it is not so well known that he was also a painter. For him, painting was more of a therapeutic experience; he practiced it through the darkest period of his life. And the idea of this music video was to create something based on his own graphical work.
Of course with my interpretation, with different colors, with a slightly different style. So these two pictures, the cover of the album Star People and an untitled painting of his, are the basis for the music video. I cannot show you the whole video, but I will show you an excerpt. But maybe first, about the challenges of this production. It was important for me to deal respectfully with the visuals of Miles Davis, of course, and to keep his love for detail in the animation on the one side. On the other side, we had six weeks of production, which is amazingly quick for animation. So the idea to work with a system of loops came also out of urgency. We were a team of eight people, and still it was quite a challenge to make a four-minute music video in this detailed style. The piece has three musical and visual parts; there are always verse, transition, and chorus, and the repetition of them. And each part consists of loops. For the verse, I'll show you a parallel animation breakdown so you can see out of which details the whole scene is built up. There is a loop for the road, for the waves. There is a loop for plants. There is a loop for a character. And with each musical bar, a new body element of this character is added, so it appears slowly, part by part, and only at the end can we recognise all aspects of this crazy work cycle. Let's call it a work cycle. For the chorus, the first two chorus loops are actually excerpts from a larger loop. So now you can see that the first two are parts of the bigger picture. And here we had approximately 22 characters interacting with each other. It was the funniest brainstorming session with the amazing animator Michelle Brandt: giving names to those characters and talking about their background stories, why they are doing what they are doing, and what the relationships between them are.
The problem and obstacle of this production was also that jazz music is fluid; there is variation in its tempo, and that's the reason why we love this music. But loops that are mathematically exact, always with a precise number of frames, are hard to fit to it. My solution was a multi-editing process: I edited each loop to the bar of the music, meaning if a musical bar is a bit longer, I add several frames to the loop, and if it's shorter, I cut several frames out. In this way we could create an interplay between those strict mathematical structures and the organic jazz music. And now I'm coming to the last project I would like to present to you. It's called Ossia, and it's an unfinished short film; that's why it's a work in progress. The film is inspired by the Russian-Jewish poet Osip Mandelstam, who was killed during the Stalinist repressions for a poem. It's also inspired by his wife, Nadezhda, who learned his poems by heart and kept them in her memory until they could be published decades later. The relationship between a repressive state, a society accepting the state, and a creative individual who doesn't fit in were the questions that brought me to this project. And no need to say that it's not imaginable in modern fascist Russia, which has invaded a neighbouring country, that those poems would be published today. For this film, I also worked with a sophisticated loop system. It consists of 150 scenes, and I'll show you a little bit of the animation process behind them. Each scene, each loop, is one, one and a half, or two seconds long.
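Returning briefly to the music video's multi-editing process: a rough sketch of the idea, with an assumed frame rate and invented frame counts rather than the production's actual numbers, is to resample a fixed-length loop so it always lands on the next downbeat:

```python
FPS = 25  # an assumed frame rate, not the production's documented value

def fit_loop_to_bar(loop, bar_seconds, fps=FPS):
    """Resample a fixed-length loop so it fills exactly one musical bar:
    frames are dropped evenly if the bar is shorter than the loop,
    and repeated evenly if it is longer."""
    target = round(bar_seconds * fps)
    n = len(loop)
    return [loop[i * n // target] for i in range(target)]

loop = list(range(24))                  # a 24-frame loop (~0.96 s at 25 fps)
print(len(fit_loop_to_bar(loop, 0.8)))  # shorter bar -> 20 frames
print(len(fit_loop_to_bar(loop, 1.2)))  # longer bar  -> 30 frames
```

The even index resampling is one possible automation of the hand-editing described in the talk, where frames were cut or added per bar by eye.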
They are seamlessly animated into each other, and this should create a meditative, immersive effect where everything repeats, but the audience finds itself somewhat passive and overwhelmed while the world changes around it. So for this loop system, I precisely counted the number of repetitions for each loop and the frames necessary for the transitions, to keep the rhythm. That structure I built into an animatic, which was a great basis for the collaboration with the composer and the sound designer, allowing them to use the visual rhythm for the audio design and the music as well. For this project I also plan to develop an AI system based on all the paintings that we have painted for this film over the last years. If somebody here is interested in AI and has experience with this kind of system, I would be very curious to exchange; please come to me later or in the next days. And in the end, I want to mention that of course I showed you just a few of the mathematical tools which are important for my creative process. I could have talked about curves in animation movement, about simulation, programming, and vectors, but this particular aspect is something that stays with me through a project and influences my process as an animation director. Thank you so much. Thanks a lot, Irina. Maybe a first question from my side. You work a lot with music; your projects are very close to music, as it seems. And jazz, for example, I think Jazz Orgie was the title of the first project that you showed. I would say jazz is a very spontaneous thing; animation is not necessarily always a spontaneous thing. How do you deal with that? I mean, you have a career already, you know, 2015 is already 10 years ago. Is it nowadays, and you were speaking at the end about an AI system, is it now easier for you to stay somehow spontaneous in your work? Because it's so detailed.
I think it's an illusion that music or jazz has to be spontaneous. Even the very first piece, this one-minute piece, was the result of a collaboration with the Berlin jazz band Blowfish, and we talked a lot before the piece was composed and recorded. So the structure was already there; I drew a dramaturgy which I gave to the band, the band's composer did a sketch first, we talked about the sketch, and only at the last stage was there a recording. So I think some of it is illusion. Even in those first projects, it was very planned. Of course, nothing ever goes entirely as planned, and there are always problems that you need to solve, but I always worked with those structures, I would say. And of course I love mistakes and I love imperfections; that's why it's all painted analog. But there are other ways I bring those imperfections into my work. Questions from the audience? Yes, microphone please. Hey, I was just wondering how much you would consider math an intuitive and especially an inspirational thing for you and your process, versus just a tool, especially in animation, where a lot of times you have to pull out a calculator and say, well, I have so many musical bars, so I'm going to have to fit this number of frames in there. So it's a very obvious, cool tool, versus you sitting at the desk, you know, thinking about math. Maybe it's important to add to the question that I'm very much math-naive, so I don't really know how math works intuitively in the first place. But yeah, that's what I'm curious about. I think for me it's very inspiring, and that's what I was trying to show you through this talk: this structure really guides me and helps me with concepts, with coming up with the skeleton for a project.
So it is of course also a tool, and sometimes you need to calculate. For example, for this huge last loop I needed to work out: if I want to use this part of it, and it should be HD resolution, how big does it have to be? That, of course, is also math as a tool. But to come up with those ideas, I think it's also inspiring to have mathematics in my head and to pull ideas out of it. Sorry, I'll just add another little bit to that: do you keep up with math? Do you read the math news? Of course, everything I was showing you is based on really basic mathematical ideas. I never finished my math studies, and I'm here as a filmmaker, so that also shows that I don't have a hidden second life as a mathematician. Thanks. Yeah, thanks for your presentation. Something I'm wondering about as an animator is how to structure things, and for me it's especially music that helps a lot to combine and structure stuff. But it seems that in your case it's the other way around. Or do you also follow the line of the music and the pulse it gives and all this? I'm not sure which is your way: is it one or the other, or both? That's a very nice question. I think there is an exchange between those ways, and it depends a lot on the project. For the first project, I initiated it and asked the band to record something, and I imagined the dramaturgy on the one side. On the other side, I was also there during the recording sessions and we talked about it, and then I had this recording. And of course something new appeared out of it; there were details I couldn't have imagined before that came into it. So it was a ping-pong of ideas. The Miles Davis music video was a commissioned work. Miles Davis has been dead for a long time, and it was produced in 2022, so I had this piece and tried to make the best out of it. And for the last project, yeah, this idea of loops and scenes merging into each other was already there.
And also all the calculations of those separate loops and how they build on each other. This was something the composer got from me, and she developed her music piece on top of it. So I would say it depends on the project, but I also like going back and forth in this exchange. With each musician I work with, we talk a lot; there is a lot of exchange there. I would have a very small question and then hand over the microphone. Could you talk about the technical aspect of your work? What do you use, and how do you go into the work? Surely. Part of it is classical 2D animation; for this I use TVPaint, and After Effects for compositing. But sometimes there are 3D elements, where I work with other colleagues in Blender or Maya, whatever. As a base, I think those are the main tools, and of course for the analog workflow there is Dragonframe, the stop-motion software everybody uses. Does that answer the technical part of your question? Okay. Yes, thank you for this lovely presentation. I was very interested in how you found this story. One point is how you tell the story through loops, but what was the starting point for this story? For which one? Ossia, the last one. It's very personal, of course; it's a very personal story. I originally come from Russia, so watching what is happening to the country of my origin, even before the full-scale invasion, was challenging. I started this project in 2018-19, the first ideas. So it was already there, the question of how you can speak in a state which is getting more and more repressive. And then it's not a classical biopic; it's inspired by poetry, by the story of a state killing an artist. Yeah, I would say there's also a personal connection...
Actually, this particular poem was something that my dad would recite to me, and I had it in the back of my mind. At first I wanted to do something only with this poem, and then I realized, okay, the story of this couple, the story of this poet, is so touching for me that I need to take it into this short film, and then it kept developing. Maybe the last question. Thank you so much for the nice presentation. I just have one question about the AI you mentioned at the end. Did you use any form of AI in the last film that you showed, or can you please explain how you use AI in your work? Not yet, but I hope to do so. For this project I worked with a huge team of people, and we were developing the animation and all the painting. It's painted in a quite sophisticated way, in layers, and then assembled. And now we've ended up with an unfinished film and a lot of paintings. So my idea was to create our own AI system out of a combination of the outlines of the animation and all the paintings we have done so far, and to see if we can... even several AIs, because it's painted in layers: there are layers of acrylic, layers of ink, layers of gel pen. And to see how the system can copy our style if we also help and paint style frames for each of the scenes. I think it's a longer conversation; maybe I can explain it in more detail later. So thank you a lot, Irina. Thank you. And I'd like to go on to our last speaker in this morning session, Dr. Paul Clarke, an artist and Associate Professor of Performance and Creative Technologies at the University of Bristol, and co-director of the Centre for Creative Technologies. He established Bristol's MA Immersive Arts: Virtual and Augmented Realities, and he is a partner on Immersive Arts, a UK-wide project supporting artists to explore the creative potential of virtual, augmented and mixed reality technologies.
And alongside his academic roles, Paul is also co-artistic director of the company Uninvited Guests, with which he explores the use of creative technologies in participatory and place-based performances. It was founded back in 1998 and has shown work in many countries internationally. Paul's presentation is called Participatory Futuring Through Augmented Reality and Performance. Thank you very much. It was a long enough introduction to cover my microphone changeover, but not plugging the laptop in. So here we go. Okay, good. Thank you very much, René, for that introduction, and thanks also for the invitation to present in this context. I'm going to discuss Uninvited Guests' Future Places Toolkit, an augmented reality engagement activity for planning consultation and neighbourhood visioning. Through this example, I want to consider how AR, science fiction storytelling, and participatory performance can support communities to imagine preferable futures for their places and explore their affective relations to these. In the context of this Expanded Animation conference: the software tools were created in Unity, conventionally employed as a 3D development engine, which, as you will well know, is used for games, interactive and immersive experience design, and which in this example is used for a civic purpose. And with the theme of this year's Ars Electronica being hope, it's worth noting our interest in Ernst Bloch's critical hope and active utopianism as strategies. Firstly, a bit of background. Future Places Toolkit applies an approach developed for the AR performance Billenium, a collaboration between the performance company I co-direct, Uninvited Guests, and sound artist Duncan Speakman. Rather than a historical guided tour, Billenium is a guided tour of the future of your place, on which we tell site-specific science fiction stories.
Future architecture appears around you as a 360-degree AR animation, and you hear spatialized soundscapes of possible utopian and dystopian times to come. The tour concludes with an opportunity to design tomorrow's city together, and you see the speculative architecture you describe appear, drawn in real time and layered onto the buildings of today. Live-streamed multi-channel audio also immerses you in the location sounds of the sci-fi you collectively imagine. First commissioned by Bristol's Watershed and the Smart Internet Lab, Billenium has since shown at STRP, the festival of art and technology in Eindhoven, and in Bilbao, Budapest and Belgrade, as part of Uninvited Guests' Perform Europe-funded Performing Futures tour. Coming from Bristol, clearly one of the criteria for our partners was that they had to be based in cities beginning with B. The footage you've been watching is of this. In each of these places, people who participated suggested that approaches from Billenium could be applied in urban planning. Planning consultation can be pretty dry and tends to look a bit like this: some boards on the walls of a local hall. Consultation tends to take place away from the site being developed and is assumed to have little impact. It's hard for people to visualize or imagine what plans would look like in their context when they're represented in 2D, away from where they're going to be built. Wouldn't it be good if people's reactions to planning consultation were a little more like this? So this is a problem for architects, developers and council planners. How can you engage people with consultation, and people who are more representative of local communities, including young people?
How can you get residents involved in shaping better places for their neighbourhoods, in improving plans for developments in their places, plans that are better aligned with their wants and needs, which they can buy into and support going forward? Future Places Toolkit responds to this challenge by exploring whether creative methods and immersive technologies, science fiction storytelling and augmented reality, can get a wider range of people involved in discussing plans, and whether these conversations are more meaningful and constructive in the site that is being developed. The aim is to place citizens at the centre of the planning process and enable better contextual decision-making. People can participate in early visioning, feeding into development briefs and co-designing plans. And architects can share their designs sooner and get better, more constructive feedback. The idea is that the toolkit has both civic and commercial purpose, that it will be of value to communities and to architects, planners and developers. Future Places Toolkit uses AR on smartphones to enable participants to visualize what's planned and co-create ideas for improvements to their neighbourhoods. Uninvited Guests facilitators take participants on a short walk and invite them to imagine that they are also travelling through time, taking an imaginative journey into a preferred future for their place. As people describe what they want to see there in, say, 2053, the ideas appear on their screens, drawn live in the AR scene by an artist who's listening in remotely from our studio. They get to discuss and explore what the changes would be like in situ, overlaid onto the place where the development is planned. As with Billenium, we also use immersive sound to scaffold participants' imaginations, with sound designer Duncan Speakman drawing on a library of location sounds and effects to conjure up atmospheres of the place in future.
The images I've shown were from running the demonstrator of Future Places Toolkit for Bristol City Council, as part of their Filwood Broadway public realm consultation in Knowle West. We're running further engagement there now around scenarios for regenerating this high street; that's with Architecture 00. This video is from Birmingham Settlement's Neighbourhood Futures Festival, engaging communities with plans for their new Nature and Wellbeing Centre; that was with colleagues from the Centre for Sociodigital Futures. So in this context, I thought I'd say something about the shifts in the AR tech development. As you've seen, the drawings were perspectival, but 2D. Initially, for the performance Billenium, we used image markers, and the artist was drawing onto a still photo of the site using a graphics tablet, so the image only aligned effectively with the real world from one perspective. We shifted to using cloud anchors and a spherical image of the location for the artist to sketch onto, so participants can look in all directions and have three degrees of freedom. Then we added the function of placing and drawing onto camera-facing billboards, and of spawning pre-made 3D objects from a library. Last year we received R&D funding from the UK's Digital Catapult and MyWorld, from their Tools for the Creative Industries challenge call. That enabled us to work with the Bristol-based AR and VR studio Zubr to rebuild the Future Places software. So the artist is now sketching, or sculpting, live in VR, and their 3D drawings appear in AR for the group on location, with their networked phones acting as portals, or, as we say in our science fiction frame, devices that enable you to see into the future of your place. We're now using Google's Geospatial Creator in Unity, which uses ARCore and 3D Tiles from their Maps platform and Street View scans, so the artist can enter the coordinates and see the geometry of almost any location.
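As a simplified aside, placing content by entered coordinates ultimately comes down to converting geodetic positions into local offsets from the viewer. The sketch below is back-of-the-envelope spherical math with invented coordinates, not ARCore's or Geospatial Creator's actual solver:

```python
import math

EARTH_RADIUS = 6_378_137.0  # WGS84 equatorial radius, metres

def enu_offset(viewer_lat, viewer_lon, anchor_lat, anchor_lon):
    """Approximate east/north offset (metres) of an anchor from a viewer,
    using a local flat-earth approximation; fine over a few hundred metres."""
    east = (math.radians(anchor_lon - viewer_lon)
            * EARTH_RADIUS * math.cos(math.radians(viewer_lat)))
    north = math.radians(anchor_lat - viewer_lat) * EARTH_RADIUS
    return east, north

# An invented anchor 0.001 degrees of latitude (~111 m) north of the viewer:
east, north = enu_offset(51.4545, -2.5879, 51.4555, -2.5879)
print(round(east, 1), round(north, 1))  # 0.0 111.3
```

In practice the geospatial APIs handle altitude, heading and drift correction as well; this only illustrates why entering coordinates is enough to position a sketch in a shared real-world frame.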
They can bring the real world into their editor and fly through a photorealistic 3D representation of the space, as you would on Google Earth. So now their 3D sketches can be located, or anchored, in the real-world location, and participants can walk around them and get a better sense of what they, or the changes to a place, would be like. We still retain the feel of live sketching or illustration, although in a way, as I said, it's now sculpting. The style is hand-drawn, so it doesn't seem as finished or finalized as architectural models can, but open to discussion. They can combine pre-made 3D drawings from the library with different sketching tools, in response to the speculative conversation. Bruce Sterling has suggested that virtual environments could be described as interactive design fictions, virtual sandboxes for democratic, participatory ways of co-creating speculative narratives, and what Hartmut Koenitz calls playful utopias: safe spaces in which to simulate and play 'as if', to try out different scenarios which can fail without actual consequences. I want to explore these ideas in relation to Future Places Toolkit, and in this case, collaboratively augmenting real environments as a way of co-designing preferred futures and place-based science fictions. In reference to Brecht's Verfremdungseffekt, Darko Suvin describes the cognitive estrangement of science fiction, which can defamiliarize our observed environment in a way that functions dialectically. Similarly, in their book Speculative Everything: Design, Fiction, and Social Dreaming, Dunne and Raby propose that speculative design and sci-fi scenarios are aids for critical reflection, in this case on the present place, its current architecture and existing plans. The distancing effect of sci-fi also relates to Adam Greenfield's take on augmented reality, which differs from other definitions of, or aspirations for, AR as a seamless mix of physical and digital.
He describes AR as superimposing a location-specific graphic overlay onto the visual field. It's in this way that Future Places Toolkit enables participants to see their speculative ideas layered over existing buildings in the site being redeveloped. Rather than conventional uses of AR overlays to provide pragmatic information like directions or historical facts about the locale, Future Places instead presents people with visualizations of the alternative future realities that they describe, though maybe not the alternative reality inhabited by monsters of various types that you may be familiar with from playing Pokemon Go. Instead of theorizing AR as blended reality, Greenfield discusses the conceptual shear between the physical world and the realm overlaid onto it. With Future Places Toolkit, the contemporary place remains visible to participants through and alongside the AR drawings of the future on the screens of their mobile devices. They're conscious of the shear between the physical environment they're in and the virtual layer that augments it. Perceiving the gap between everyday social reality and the hopeful science fictions they've dreamt up or designed together enables participants to measure life as it is in their place against how it could be. It is in between the reality of the neighborhood now as is and the utopian as if of the preferred augmented reality futures that critical comparison can take place. Whereas with VR, there's an attempt to enable you to be there, to feel present in the virtual world, or to be within the image. With AR, virtual objects or simulations seem present in an actual place. With digital objects brought into the real world and located here, for instance, on a city street, or here in a square in Bath. A haunted public space if you like, but in this instance not ghosted by pasts but by possible futures. 
Observing people interacting with Future Places Toolkit, I saw them moving around a 3D augmented reality planter as if it were present, circling a virtual person, or avoiding stepping into a fountain that's not yet there. People also showed one another around, giving guided tours of the speculative public realm that they'd co-designed. Perhaps you've experienced pseudo-haptic sensations from walking into an AR object, or synesthesia, when the visual perception of a digital object produces a feeling like touch. Brian Massumi writes of bodies with the ability to affect, and of affected bodies. These digital objects affect participants' bodies in a way that is performative: they do something to us and make us move, and where we are, or how we move, affects them. Or, in Karen Barad's terms, we might say the virtual objects are agentially real, as the 3D sketches have the capacity to act on the participants they encounter, have the agency to change their patterns of behaviour in the public realm. In this hybrid physical-digital space, people's actual movements were choreographed by AR architecture that's only virtually there. Using live 3D drawing and spatial audio, Future Places Toolkit tries to materialize virtual possibilities for places and enable participants to explore them physically. The intention is that this embodied engagement makes possible places more tangible, giving people a sense of what they could be like to live in, enabling them to decide their position on them, informed by embodied experience and sense-making.
As well as social ideation, visualising ideas together conceptually and reflecting critically on their present place, participants get to test out how they feel about the futures they dream up, and to do so physically. Bearing in mind the agential realism of AR, the realness of these digital objects that act on them and make them act, we hope to enable participants to explore embodied and affective relationships with possible futures, to negotiate them spatially. A community member at Birmingham Neighbourhood Futures Festival said: 'I can go to the vision. It's very powerful. Dream. I can dream.' And another participant said: 'When you picture something, some imagined architecture, in your head, it stays in your head. But when you see it being drawn, it makes you feel like it's coming alive and it's a real possibility. And you engage with it in a very different way than if it's just in your imagination.' The images you've been seeing, and this visual report, are from engagement we've been doing with Bristol City Council around the Temple Quarter redevelopment by Temple Meads Station in Bristol, which I think is one of the largest regeneration projects in the UK. Our aspiration with Future Places Toolkit is that AR and located science fiction storytelling can, as Dunne and Raby say, help people participate more actively as citizens in co-creating more socially constructive imaginary futures for their neighbourhoods. Not singular futures, but co-designed, pluriversal and inclusive futures in which diverse world views, needs and lived experiences can coexist and thrive. People, particularly from communities on the margins, are often excluded from democratic decision-making and planning processes concerning the future of their places, and from both writing and featuring in mainstream science fiction.
Perhaps immersive means can scaffold collective or social imagination, offer people who are representative of local neighborhoods the agency to narrate themselves into times to come and to see themselves in their preferred future scenarios. So in the time I have, I will end with the questions that drive this project rather than conclusions. Can participatory futuring through performance and augmented reality increase people's ability to imagine otherwise, to envision alternative social imaginaries from those we expect or that are anticipated for us by councils, developers or corporates? Can such interactive, immersive approaches, which make possible futures feel closer to reality, and therefore more probable, prepare community members for taking agency in shaping their places? Through such participative and prefigurative methods, can we pre-enact possible scenarios, both to inform planning decisions and to rehearse more democratised or social forms of local decision-making? We hope Future Places Toolkit can build people's capacity to anticipate, the etymology of which I like. And the etymology is to take care of ahead of time, to care about or for their own, their community's, and others' futures. So I've got a short coda. One of our inspirations was Anab Jain of the speculative design studio Superflux, who writes that she hopes that through the lens of the future, people can reflect better on the present, on the decisions and the actions we take today, on where we want to be and what we can do to get there. For Mitigation of Shock, Superflux materialized a future scenario, building a full-scale physical prototype of an apartment in Singapore in 2068, radically adapted to deal with the consequences of climate change. Their installation enables you to explore the apartment.
It lets you inhabit and experience, in an embodied way, how we might live in a world transformed by climate change. With Future Places Toolkit and other work with Uninvited Guests around digital futures, such as Future Soundings, which this is an image of, I'm interested in creative and immersive approaches to futures, particularly what Stuart Candy calls experiential futures. Can speculative, virtual or mixed reality spaces be created through which, or from which, we can reflect critically on the present, on dystopian futures currently in the making, or on the potential unintended consequences of today's decisions? Or, turning to hope, critical hope perhaps, can XR technologies like games engines, spatialized sound, AR, and virtual environment design be applied to enable communities to co-design and experience preferable, sustainable futures, and to imagine and materialize alternative, inclusive scenarios? Thanks. Thank you very much. We are doing great on time, which is super. So are there already some questions from the audience? A microphone, please, to Sonja. Thank you for presenting this great project and toolkit. I would be interested in how you document the outcomes of the sessions and how the communication works with the decision makers. Yeah, so that's relatively new to us, but for the last year we have been working with Bristol City Council, and in fact there's been some co-design of the reporting processes, which has partly taken place through us consulting with architects who run engagement with communities and are familiar with reporting, and also with the council, working out what it is that they need. But these are the kinds of visual reports that we've been producing. I mean, I'm showing you the visual reports. There's also some analysis, for instance, of themes around kinds of wants and needs that arise.
But yeah, these are some of the reports, or the visual aspects of those, from the Temple Quarter redevelopment consultation. And I think I had further... oh, it's probably a long way back. I also had some annotated illustrations which were from Birmingham Settlement and the consultation we were doing there around their neighborhood and well-being center. But we're in the process of this collaboration with Architecture Zero Zero, who are a really interesting architecture company, who are also interested in systems design. They're doing a project around Knowle West, and particularly Filwood Broadway, the high street that we've also done some earlier work on around the public realm. And we're about to go and do some work there where we'll be presenting some illustrations of scenarios, so starting to illustrate some options there for a kind of thriving high street, which we'll then discuss and adapt through the collaborative drawing sessions with the public. But yeah, we're working really closely with architects and the council to work out what their needs are in terms of reporting, you know, recognising that there are formalities in planning processes, for instance, like having a statement of community involvement, which there are certain requirements for. Yeah. Thank you. So, oh, there are other questions. Great. So now you have to choose. There's a question from our live stream. Axel asks, how was your experience with the tracking quality using the geospatial creator? Well, it's actually pretty good. I mean, it's an interesting kind of trade-off, because the most accurate match that we've achieved between the real world and the artist's images was what we did at first, with image tracking in the early 2D version, having participants stay at one point and view from the same perspective.
But the tracking is pretty good now with geospatial. But what we are using is also... so we're combining the use of geospatial with a tracking image. And so there's always some minor adjustment of that tracking image, manually, to get a really accurate synchronisation between the virtual and the real spaces. Thank you. It's a short question. Thank you so much for the presentation. You mentioned this VR application where people can draw and kind of sculpt, right? I just wonder, is it commercially available? If yes, where? It's not commercially available, but that's something that we're exploring. So at the moment, what we've been focusing on really is the designing of the software for this AR engagement activity that we deliver ourselves.
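Returning briefly to the tracking answer above: combining a coarse geospatial anchor with a manually adjusted tracking-image offset can be pictured as composing two poses, a coarse world anchor and a small correction expressed in its local frame. A minimal, hypothetical 2D sketch of that composition follows; it is not the toolkit's actual code, and all names and values are invented for illustration.

```python
import math

def apply_pose(pose, point):
    """Transform a local (x, y) point by a pose (x, y, heading in radians)."""
    x, y, theta = pose
    px, py = point
    return (x + px * math.cos(theta) - py * math.sin(theta),
            y + px * math.sin(theta) + py * math.cos(theta))

def compose(base, correction):
    """Refine a coarse anchor pose with a correction given in its local frame."""
    cx, cy, ct = correction
    wx, wy = apply_pose(base, (cx, cy))
    return (wx, wy, base[2] + ct)

# Hypothetical values: a coarse geospatial anchor (metres, radians)
# nudged by a small offset measured against a tracking image.
geo_anchor = (100.0, 50.0, math.pi / 2)
image_offset = (0.3, -0.2, 0.05)
anchor = compose(geo_anchor, image_offset)
```

The appeal of this split is that the geospatial fix gets the scene roughly in place anywhere outdoors, while the image-based correction absorbs the last few centimetres of error where precise alignment with the real street matters.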
And also, we have ideas that it could be useful for people in event design or festival design. So we have different ideas about sectors and collaborators it could be useful for. But that's the next step: considering whether we can develop this as software, or software as a service, that we could then offer to others to apply in other contexts, whether that's in a kind of planning consultation context or actually for uses in other sectors too. Yeah. Question here. Hello. Hi. Very interesting project. I had a question about the aesthetics. You had mentioned, and I'm sorry if I got the wording wrong, that that aesthetic was deliberately chosen to leave it kind of open-ended. Was that the word you used? Open-ended? And I'm sort of grappling with that in my mind, because for me it isn't open-ended. I'm looking at the world from a child's perspective, and the audience members, the people participating, appear not to be children; they appear to be mostly adults or young adults. So I was wondering if you could comment on that. Well, I mean, I suppose there are two things that are worth saying. So one is that you're seeing a software tool, or drawing tools, that are in development. In some of the earlier images you're seeing, the tools are actually not as sophisticated as we would like them to be. So the artist is working with quite limited tools and a limited palette. As the images go on, you're starting to see people working with more sophisticated tools. Of course, they're also drawing immediately. They're drawing live in VR whilst listening.
So there is a need for the illustrations to be relatively simple, illustrations that can be produced quite rapidly, even though they are now able to draw on a library of pre-drawn objects, which speeds up the process quite considerably. But yes, it was a consideration in the development process, and it could have been a choice. We could have decided to go with photorealistic objects and a library, though that would have tied us to a library to a large extent. But we could have gone with photorealistic renderings, and instead we decided not to, partly because we're working with artists who are illustrators moving into VR, but also in conversation with architects. So the architects who have tested this have said that this is the thing that brings sketching and draughtsmanship back into architectural drawing, which they felt was missing from 3D model work. And the appeal for the public and the architects who have collaborated with us is really because this is for use in consultation, rather than for showing a client, for instance, what a finished building might look like. The idea was to keep this rough and sketch-like and illustrative, because certainly the feedback we've received from them is that that feels less finished, less of a fait accompli, and much more open to the imagination, and yeah, to change and the input of the public in those communities.
And in a way playful, and that's where actually... because, you know, the sort of childlike aesthetic could be read as a kind of criticism, but actually we do want it to feel playful and open to more speculative imagination, imagination that takes people out of their everyday concerns around constraints and scope, and opens up some of the themes, the wants and needs, in other ways that are not limited, perhaps, by the concerns around either constraints or resistances to developments, where you maybe fall into a not-in-my-backyard kind of attitude rather than thinking perhaps more constructively about the potential of developments that might respond to needs and that you can buy into. Yeah. I think here is maybe a last question, I would say. Yeah, I think when there's a collaborative effort in a digital space, you're bound to get at least some degree of socialisation. So have you ever experienced people wanting to write messages to each other? Or do you even appreciate it? And how do you deal with it? I suppose, again, there was a decision in the development process that the drawings are filtered. The drawings, and also any text or annotation, are kind of filtered through the artist, and what we are hosting and facilitating is a conversation in the place. But there was a certain point at which we were also interested in, well, can the participants draw with their mobile phones? Can the participants place 3D objects? Or can we kind of annotate? And those things are possible. At the moment, it's usually me who is writing annotations for the artist about where to place objects, or what the key themes or ideas are that have come up in the conversation. So no, at the moment the public are not participating in that way, partly to keep some control over this already, we think, quite diverse and multivocal kind of vision, because we're also not necessarily trying to reach a sort of consensus.
So actually we are trying to represent all of the ideas that come up, even if some of them might point in quite different directions, or towards different futures. So yeah, at the moment people are not annotating or drawing or contributing themselves, but that would be possible, and that's something that we could explore in future development. So maybe I would come to some closing remarks. When I saw the images and videos, what you could really see in the audience was that they are having fun, which is great, and which I guess is connected to your theatre background, because you don't necessarily always see this sort of joy or fun in media projects, or collaborations, which are connected to AR or VR, or XR, whatever. So congratulations on that. Just a little remark on that, which is just to, again, return to the sort of playfulness. You know, one of the functions of this, and it does come from having a theatre and performance background, or a participatory performance background, is that actually we are trying to inspire people and scaffold their imaginations, and also to engage people, and maybe to engage other people than those who would tend to come and get involved in a planning consultation, which, as I said, tends to be perceived as dry. So the idea is partly that it is entertaining in itself, and has some value in being entertaining and engaging as an activity to participate in. That's quite important to us. But yeah, I think you were also asking about people coming to have conversations about future possibility, for instance around AI. We're also open to those things.
So if people have ideas for kind of future development and sort of interest in this project: we recognise that there could be collaborations between our artist and AI that could take these illustrations and realise them as photorealistic objects, or, you know, there would also be other ways of building new functionality into the tool, in the sorts of ways that are coming up, for instance around being able to also show other aesthetic styles, like photorealistic architectural models. So yeah, if people have ideas or want to speak more, chat with me over the next couple of days. Thanks a lot. And yeah, thanks also to Irina and to Slow Bros, Ole and Onat. I think you're all around in the next hours and days, so maybe you're open for questions. So thanks a lot for these quite diverse topics that we discussed here, and to the team also, thanks a lot for the live stream. Rigi, could you show the... you know... now, as far as I know, there's a break of one hour. Oh, great. The echo, playful until the end. The slide with what's happening in the afternoon. We have a break now, we deserve it. And then there's a welcome for the conference part, so in the afternoon there will be shorter talks, which are based on paper submissions. And yeah, I would say see you. So thank you, and see you in an hour. Thank you.