Thank you. Hi everyone, welcome back. This is Art Papers — Art Research Papers Session 3, the final session of the Art Research Papers track. My name is Victoria Szabo and I will be moderating today. Our first paper is by Greg Bott and Musab Gargouti. Greg Bott is a lecturer in digital arts at Caliwith College. Musab Gargouti is a program leader and lecturer at the University of Plymouth. Please join me in welcoming them.

Hello, my name is Greg Bott. I'm not going to go over everything we've already covered, but, hopefully. We're extremely grateful to be here and to share our paper with you. Obviously the title is up there; I won't bother reading it out. But basically, our project began in 2021 as a collaboration between myself, at the time, the University of Plymouth, and an ensemble from Bristol. Later on we were supported by Andrew Prior, the other colleague — or another colleague — on the board, who is leading it at the moment, and who is subject leader in digital design at the University of Plymouth also. Since its inception, our project has always been about how devised theatre and immersive production, two radically different temporalities, can be more closely integrated with one another, both ideologically and creatively. As such, we've tried to narrow the space between theatre and VFX from the outset, which has involved a wealth of learning on both sides.

At the very beginning, these attempts at integration were, by their very nature, quite experimental. As we sought common ground or points of convergence between two quite radically different temporalities in theatre and VFX, we were using motion capture, projection, visual mapping, Unreal Engine's quick environment-creation tools — which Musab will discuss a bit more in a moment — lidar face scanning, MetaHuman to bring scanned actors to life, and digital sculpting: tools which, in the end, were used at pretty much all stages of the project. Throughout, it felt like we were always striving to find ways to collide theatre and digital practices. But instead, we kept creating these ephemeral meetings of the two, wherein some small, strange thing might grow into something bigger.

As such, there were a couple of key questions which arose over time and kind of crystallised for all the different people working on the project. How can the dueling temporalities of theatre and working with immersive technologies be navigated successfully? What is an appropriate methodology for devising media — that is, an approach to media creation that parallels the methods of devising narrative within theatre? How might this approach understand virtual media assets as qualitatively different from analogue dramaturgy, scenography or props? There were a number of key factors at play here which we'll get to, but one of the big problems we encountered was this. We spotted this last year when we were watching Chris Salter's talk, and this particular quote really resonated with everyone on the project, so I'll just allow a small pause for you to read it. It's not only, as Chris Salter points out, a big area of research; in our project, this was a central point of tension throughout. Ideas are fast, a gesture is fast, the meaning conveyed in a pause or a subtle change in intonation or expression is fast.
Devising theatre is definitely faster than technology. The question then became not how technology could be sped up, but what we could do as technologists to better align them. So once the funding for a bigger, hour-long performance was greenlit by the university in 2023, we had to find a new way of working — we had to start tinkering, an idea which is analogous with Claude Lévi-Strauss's concept of bricolage, French for tinkering or do-it-yourself. This is actually taken from our paper, and it was quite a fundamental part of the methodology we developed while we were working together with the theatre company. But I'll read it out. Combining the words thinking and tinkering, the concept of thinkering came to provide a useful way to understand this approach. The term simultaneously frames making practices as a space for reflection and thinking practices as a mode of making. In a sense, the idea of thinkering evolved from a misconception that theatre or tech could be forced into some kind of coexistence, where one tool could mutually constitute the other. But this only really started to make sense once we realised that technology did not necessarily need to be a tool; it had to become a performer too.

As the project progressed, this new methodology — the methodology of thinkering — enabled us to respond to and navigate the aesthetic flux that is an essential aspect of rapid devising processes, or praxis. It helped us, the project's technologists, bend time-consuming tools such as Unreal Engine and other games-development pipeline programs for 3D modelling, animation and media creation to meet the rapid, ephemeral and often capricious timeframes of devised theatre. In a sense, we had to unlearn the intended, often slower pipelines of these tools and treat them as real-time tools, essentially eschewing polish and giving agency to low-fidelity aesthetics and glitches. Within the span of the project's development, these elements operated less as a means of simply turning something around quickly and more as provocations, motifs and storytelling agents in their own right. The theatre piece we went on to produce, entitled Playable Character, explored the possibilities of using Unreal Engine's real-time tools in more of a performance context, but with an emphasis on storytelling, immersion and interaction. To explain how we started doing this in a more technical, practical sense, I'll defer to my fellow technologist, Musab Gargouti, who joined the project once it had secured this additional funding in 2023.

So we began by collecting a range of ready-at-hand assets from the Unreal Engine store and Quixel Bridge, creating a library of assets and environments. These assets were kit-bashed together into scenes as a kind of pre-production process, though most, if not all, scenes were continuously modified and reflected on throughout the production process. Effectively, we'd moved the post-production up front and prepared as much of the content as we could before we started the two-week window. These workflows are vastly quicker than those of traditional 3D modelling, lighting and animation pipelines, but they were still significantly slower than a dynamic and fluid workflow. Our thinkering approach enabled ongoing and reflective development and gave rise to multiple new settings, scenes and animations.
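To give a rough sense of what that kit-bashed pre-production library might look like in practice, the short sketch below uses Unreal's editor Python plugin to list every asset in a library folder and scatter the static meshes into the open level as crude blocking, ready to be rearranged by hand. It is a sketch under stated assumptions, not the production script: the folder path and grid spacing are hypothetical, and the exact editor API names can differ between engine versions.

# Minimal kit-bash blocking sketch for the Unreal editor's Python plugin.
# Assumes a hypothetical project folder "/Game/KitbashLibrary" holding
# Marketplace / Quixel Bridge assets; API names follow the UE4/UE5 editor
# Python module and may differ slightly between engine versions.
import unreal

LIBRARY_PATH = "/Game/KitbashLibrary"   # hypothetical asset library folder
GRID_SPACING = 800.0                    # rough spacing in Unreal units (cm)

# List every asset under the library folder (recursively).
asset_paths = unreal.EditorAssetLibrary.list_assets(LIBRARY_PATH, recursive=True)

placed = 0
for path in asset_paths:
    asset = unreal.EditorAssetLibrary.load_asset(path)
    # Only scatter static meshes; skip materials, textures, blueprints, etc.
    if not isinstance(asset, unreal.StaticMesh):
        continue
    # Drop each mesh onto a simple grid so the scene can be rearranged by hand.
    location = unreal.Vector((placed % 10) * GRID_SPACING,
                             (placed // 10) * GRID_SPACING,
                             0.0)
    unreal.EditorLevelLibrary.spawn_actor_from_object(asset, location)
    placed += 1

unreal.log("Placed {} meshes from {}".format(placed, LIBRARY_PATH))

The point of a throwaway script like this is speed over precision: the blocking it produces is deliberately rough, which suits the low-fidelity, on-the-fly aesthetic the speakers describe.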
Originally, we'd planned on having the show run in real time, so modifications could be made during the performance, much like virtual production for film. This presented some practical challenges, like having the appropriate hardware and the loading times, as well as the difficulty of live-editing an environment at the same time as running it as a display during the performance. Over the last few years, games engines like Unity and Unreal have implemented new systems, and in Unreal 5 these include Nanite and Lumen. Nanite is a virtualised geometry system in Unreal 5 that allows users to import far more detailed assets with really high poly counts. Lumen is a real-time global illumination and ray-tracing system that allowed us to change the lighting as required on the fly. These new systems enabled us to use multiple complex 3D assets created by 3D scanning and traditional photogrammetry without so much optimisation or retopology — processes that would otherwise be very time-consuming. As an example, the scans of the actors alone were around 4 million polygons each. We made use of a range of 3D scanning technologies, like the Artec platform, which uses structured light, as well as traditional photogrammetry tools. We also embraced the animation glitches and scanning artefacts that arose as part of the thinkering aesthetic, and these scanning technologies were also embedded into the narrative of the performance.

We also explored the idea of scanning audience members; the idea was to include the audience in the visualisation and the narrative. Despite the process of scanning becoming much faster, and the ability to use very high-poly assets, we found that it still took around 20 or 30 minutes to conduct the scan as well as rig and animate the characters. It was also heavily dependent on a stable internet connection, as we were using MetaHuman, a web-based character creator which generates fully animated characters from a simple face scan. Because of these limitations, we decided not to use the scans of the audience in the production. When it came to character animation, we created two versions of the actors using different systems, then alternated their use depending on the requirements of the scenes. First, we used a Vicon system for bespoke motion-captured movements, which we then applied to the MetaHumans. We also used Mixamo, a free online rigging and animation library, for animating the full-body scans of the actors. For future development, we want to explore the use of alternative motion capture systems like Xsens, a portable motion capture system that doesn't need camera tracking at all. The idea is to use this sort of system for real-time live performances of the actors and props. The ability to render scenes in real time significantly changes traditional pipelines, which are rather linear, just as is the case with virtual production for film. Rendering became responsive, in that we could afford to change direction quickly and make changes on the fly as required. The resulting performance, as you can see from the clip behind us, really does embrace the diverse tools and software we used, rejecting the high gloss of big-budget theatre and VFX for a more intimate and authentic portrayal of digital media.
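The arithmetic behind dropping the audience scans is worth spelling out. At the 20 to 30 minutes per person quoted above for scanning, rigging and animating, even a modest audience blows well past a devising session. Only the per-scan timing comes from the talk; the audience size and session length in this back-of-the-envelope sketch are hypothetical.

# Back-of-the-envelope budget for scanning audience members, using the
# 20-30 minute per-person figure quoted in the talk.
# Audience size and session length are hypothetical assumptions.

MINUTES_PER_SCAN = (20, 30)   # scan + rig + animate, per person (from the talk)
AUDIENCE_SIZE = 40            # hypothetical small-venue audience
SESSION_MINUTES = 120         # hypothetical pre-show window

low = MINUTES_PER_SCAN[0] * AUDIENCE_SIZE
high = MINUTES_PER_SCAN[1] * AUDIENCE_SIZE

print(f"Scanning {AUDIENCE_SIZE} people: {low/60:.1f}-{high/60:.1f} hours")
print(f"People scannable in a {SESSION_MINUTES}-minute window: "
      f"{SESSION_MINUTES // MINUTES_PER_SCAN[1]}-{SESSION_MINUTES // MINUTES_PER_SCAN[0]}")
# -> roughly 13-20 hours for the whole audience, or only 4-6 people per
#    two-hour window, which is why the audience scans were cut.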
The idea was to allow the possibilities of the software — not just in terms of what it can output — to take centre stage, which also meant focusing less on a final outcome or outputs and embracing the aesthetics of creating, manipulating or tinkering, tweaking, bending or breaking: not perfect production or perfect post-production. As the project developed, it became clear that the methodology of thinkering had a kind of agency in its own right. The script itself is a good example of this. At no stage of the production were we ever working with a fully fledged script. It was written as part of the devising process, but the movement, or erosion, of explicit modes of devising or thinkering began to affect the narratives of the performance. The aesthetics of the software, both in terms of creating content and of the user interface, had become such a powerful storytelling device in their own right that by the end the younger of the two protagonists was written into the play as a games designer — essentially a meta-commentary on the role we were performing as creative technologists.

So in conclusion, we feel that theatre and VFX being mashed up in this way is quite a new — or emergent, I should say — area of research. From the outset, we felt the lack of alignment between the temporalities of devised theatre and virtual production keenly, which meant that the approach we had to find, this thinkering approach, enabled, hopefully, a better and richer collaboration between theatre practitioners and us as technologists. As a methodology, thinkering designates technological interventions as conditionals, proxies and placeholders within an emerging narrative, encouraging the use of low-fidelity materials and/or media to create faster turnarounds. Within the span of our project, thinkering led to novel solutions being arrived at and a more joined-up way of working between narrative and media assets and agents. Furthermore, thinkering designates software not necessarily as points within a linear production process, but as nodes within a discursive field — a shift in perspective we found hugely transformative in the devising of our Playable Character. Thank you.

So we do have a few minutes for questions. Does anybody want to raise a question? I can start. No, you can go ahead. Hi, thanks for this presentation. I have to admit, I didn't understand the concept of thinkering. I mean, for me, it sounds like trial and error, just trying things. But can you elaborate a little bit more on that so it's clearer? Yeah, sure. So thinkering — I'll go back to that idea of Claude Lévi-Strauss's, of it being almost like tinkering or do-it-yourself. We wanted to try and, I guess, create something. With the tools that we had, we kind of had to use them — they were kind of locked in, we couldn't resource that much, it was quite a small project initially. So actually trying to find novel or new solutions with what we had was a really key aspect. And obviously, it was more a way which — if I go back to the slide where it has that quote, sorry, no, no, this one — this idea of essentially making practices being a space for reflection and, obviously, thinking practices being maybe the space to come up with novel, creative making solutions as well. But yeah, there was an aspect of that.
It does feel like we had to incorporate this idea of everything being on the fly, because devised theatre is such a — I guess, well, it's a kind of crazy beast, and we were trying to tame that with technology, which is generally a longer-format, longer-term kind of solution-finding thing.

There was some very beautiful imagery at the end of the presentation. I was wondering how you wrote this — how was it, working with the script together with the technology? Yeah, so essentially there was no script as such. It was continuously developed — that's kind of part of the thinkering aspect of devising a production. So obviously there were some guidelines and some starting points to it. But yeah, it was a reflective process of working with the actors and performers and continuously modifying and reflecting on what we were doing. They would make suggestions in real time; the team were able to modify and change and re-render and re-scan elements. So it's very much an organic, fluid sort of process, and as a result new things were discovered — new animations, new ways of using these assets — and implemented. Thank you.

Firstly, thank you very much for the presentation. Even the presentation was good. I have one question; it's not clear for me. I'm making video content for theatre myself, and I don't understand the point of such an approach. I just want to maybe... Because, you know, theatre production is a quite long process, especially if you're making a ballet, an opera, I don't know — if you're speaking about classic theatre. Even if the performance is not a regular one but will be played once. So, did I understand you right? It's more about an aesthetic approach, not a technological approach. Because there are some other approaches to making virtual rehearsals. For example, people in Russia make virtual production for the Mariinsky Theatre and things like that. But it looks glitchy, and it looks like something more aesthetic and about art, not about accelerating theatre production.

I guess the first thing to know is that, as Musab pointed out, we had to build aspects of pre-production into the run-up to these quite short windows in which we would have access to, you know, theatre people, theatre tools, theatre methods and whatnot. So much of what you've seen there — we didn't have a script, we didn't have much in the way of, I guess, virtual content. So we were having to build it all within these quite short working periods. One of these periods, in January of this year, was one week long. In the other one, obviously, off the back of that, we then started to outline, I guess, vignettes or ideas for story elements that could then be translated into VFX. But essentially, we did the bulk of this in two weeks, which is why, when we were working, we sort of started to think about the research aspect of the project, which is what we'd all been brought in to help with as well. And yeah, we did feel like we were trying to find a methodology which would move quite easily between theatre and VFX and enable a shared language, and also all the various learning that took place on both sides.
I appreciate it probably does look a lot like aspects of standard workflows or pipelines and stuff. But I guess the time-boundness of the projects really accelerated having to find a new way of working, a new methodology. As experimental theatre, it looks great. I just don't want to attack you — sorry, I didn't have any such wish. Okay, then can we say that it's like a new approach for experimental theatre, like a theatre laboratory? I don't... well, I would like to think it's novel, that there are experimental aspects to it. I think actually being part of the devising process is, I guess, uncommon — typically when we use VFX, it's in post, it's after the fact, isn't it? It's a much longer timeframe. So, yeah, having to embrace the lo-fi, embrace the glitch, embrace the low-fidelity qualities of not having that time to post-process stuff was a valuable thing, which, as Musab pointed out, did end up taking on facets of story. And actually, the theatre people were really interested and engaged with what would break as much as with what would go well. So there was this constant flow between both parties, in terms of tech and in terms of devising theatre and stuff. It was really cool. Thank you. All right, I think we're about out of time for this one, so thank you for your presentation.

Okay, next up, we're going to hear from Bin Chen. Bin Chen is a media artist, design researcher, and educator at the China Academy of Art, and is present in the room, I hope. Well, while we're getting set up, I was just thinking about how so often new media art takes on the subject of new media, and I feel like there's a little bit of that in a lot of the presentations we've seen, especially the last one just now. Thank you.

So I'm going to ask some questions before we get started. I'm curious how many of you feel it's difficult teaching in universities. I mean, for me, it's quite a tough thing, especially because I was trained in the UK, in Europe, but now I'm teaching in the Chinese system, so sometimes people get confused, I get confused, and then we both get confused. But the good thing, I think, for me is all of this, so we could work out something new together, which is something not that traditional compared to design education in China. This is something I would like to share with you today. Yeah. So now I'm going to quickly open my keynote, and yeah. And I can even tell you the title of the paper that is about to unfold before you: Some Practices and Reflections on World Building as a New Path for Design Education under Oriental Thinking. Thank you. I'm not sure. We're on.

Hello, everyone. Sorry, there was a bit of an accident before my presentation. I just arrived yesterday, so basically today I'm in very heavy jet lag — I think the same is true of my laptop as well. So if there is anything you don't understand from my presentation, just please ask me after the presentation. I would like to communicate more in the international world. So yeah, good afternoon. My name is Bin Chen. It's a bit of a strange spelling compared to English names. I'm honoured to be here to share some of my case studies and thoughts on the experimental design education course I'm teaching in China. This is me — it might be helpful to introduce myself a little bit. So my background starts in design.
I studied on the MA Design Interactions at the Royal College of Art under Professor Dan Raby. I was also a media artist before I went to the Royal College; I have often explored the dynamic relationships between humans and technology through my work. Furthermore, I'm a transcultural facilitator based in Japan, China and the UK at the moment. So besides being a media artist, I'm an educator focused on researching new design methods in academic contexts. In recent years my work has primarily centred on developing interdisciplinary design approaches and exploring innovative design education in Asia. I'm also teaching in several universities, mainly in China right now, such as Tsinghua University, CAFA — the Central Academy of Fine Arts — and the China Academy of Art. So today I'm going to share some of my educational practice on the world-building course I'm leading at the China Academy of Art. This is also the only world-building course in China at the current date.

So in Chinese, the words for technique and art share the same character, shu, which you can see in the word there — this one, it's called shu. Shu means ways, methods, paths to reach something. So in Chinese, shu — approach, in English — is also an engine for one's own way of thinking. Just like other creators in the world, I think Asian creators design in their own way. For example, you can see there are three videos under the diagrams. On the very left is a normal thing every morning in the Temple of Heaven in Beijing: basically, all these elderly people develop these techniques to stretch their bodies every morning in the Temple of Heaven. There's a story that if you don't have any technique, you won't be able to get into this park for exercise. So there's a group of these. In the middle is a friend of mine who did this exhibition in the UK called Shanzhai Archaeology. She collects these mobile phones from Huaqiangbei, in the Shenzhen tech building. People actually make these in Shenzhen, and there are customers for these kinds of products in China as well. And the very right one — sorry, it's just too quick — is a TikTok influencer called Henry Geng, Shougong Geng. He makes lots of weird things by himself in a suburban area in China. You can see lots of his fun videos on the internet.

So back to design. How do we use things like, for example, this adobe touch in creativity? And how do we design? In an information-rich environment, rather than simply having easy access to information, design students are actually more easily drawn into it. Also, the passive education inherited from the past has made students tend to learn in a more passive way — especially Chinese students, who tend to look to the tutors for the right answer all the time before they start thinking for themselves. I think this is a very particular feature of education in China from the old days. So the world-building course's stakeholders are passive learners and those who want to break out of their solid patterns of the past. World building presents a paradigm shift, equipping students with a holistic perspective and highlighting the interdisciplinarity of the diverse systems they compose.
So we encourage students to look for weak signals from the real world and, using this foundation, gradually integrate the creative advantages of radical thinking — very non-logical thinking — into logical systems. What I'm trying to do in this course is separate this five-week course into three different phases. The first phase is called Liberating Nature, which focuses on how to change perspective when doing research from the real world. The second phase is about how to speculate from real-world material into a fictional world. And the third phase is about communicating the work beyond the self. With these I'm trying to fix three issues in design education in China: the first is provoking thinking, the second is problem-probing, and the third is external communication — hence this series of phases.

So the first phase is called Liberating Nature. In this first stage, before the world-building course really starts and before I teach the design methods, I always take one day and go with the students out of the classroom to do something they would not normally do, or be allowed to do, in their lives. For example, breaking a bowl is a very serious thing in China, because the bowl is very important — it's for rice, it's for eating. And if you break a bowl, your parents are going to be very angry with you, and you won't be allowed to eat a meal that day. You can see the student is actually quite happy with it, because they did something they are not usually allowed to do. During this process I ask them first to break a bowl and then rebuild it in their imagination, in their thoughts. Actually, you can't get the whole bowl back — the perfect bowl, you can't get it — so instead they start using their creativity to make a bowl of their own idea.

Every year the content of Liberating Nature is quite different. Last year I brought the students to a small town in the south of China and asked them to print the real world with printmaking tools. In that case they unfolded all their old, pre-formed images of the old times. For example, if you see a town, you will think, oh, there's a river, there's a mountain, and there are some people in traditional clothes. But this time, they tried to find details in this old town they used to go to and print out the things they liked. So you can see them printing everything they see in everyday life — including plants, including patterns on the wall, including their faces. Why am I doing this? It's because I want to bring them from fantasy to the alternative present — to teach them how to collect real-world material and use it as research references for building things beyond that. Especially Chinese students, when they go through a creative process, easily go to something really far out and not logically linked. Actually there is some thought behind it, but the link is kind of missing every time they design. So in case they go too far in the ideation process, I help them lock their ideas down to the alternative-present stage. Next, please allow me to share with you some of the past course process and the students' work.
So yeah, the course I'm teaching at the China Academy of Art is for third-year BA students. In the first year they have very basic training — painting and calligraphy, very basic art training. In the second year, they have some technique training, like AR, VR, and all the 3D software stuff. And in the third year, they come into this creativity year. So before the fourth year — before graduation — they have to prepare and train in how to design, learning the design process, in the third year. Every year I set a different brief. This year it's called Heat, Heat, City, where I ask students to imagine what happens when the city is burning — what could happen in this city: the culture, the system, the body, politics, and that kind of stuff, et cetera. Actually, in the year I gave this topic, a funny thing happened: Hangzhou, the city where the China Academy of Art is located, had the hottest summer in its whole history. So it's a kind of coincidence — the reality and the brief mixed together. Perhaps because of such a coincidence with the real world, students had many ideas and were very active from the very beginning.

This is the second week of the five weeks. In the second week they start analysing, putting everything into diagrams and information models to analyse their idea process. From the second week the research began to deepen. At every stage I'm using a different design methodology with the students. The second week uses the McKinsey framework — it's actually for analysing economic trends, but here I'm using it to probe problems in real society. And the third one is a special speculation process in which you start from your real-world material, go from there to a near future, and start to imagine. Here are three student works I will quickly go through with you — I can see the time is not enough right now.

The first one is called Fungus Energy Inc. The project immerses participants in a world powered by bioenergy. Within this conceptual framework, students focus on exploring alternative energy sources suited to the unique climate conditions of the simulated world. The students embarked on an investigative journey into the viability of fungus as a bioenergy source; their efforts culminated in establishing a fictional energy corporation, presented through a televised commercial animation and spread on SNS, in which they advocate for fungi as a novel, eco-friendly energy solution for humanity's future needs under these specific conditions. And at the top is their site in a small exhibition after the course. Can I have the sound turned down a bit? Thank you.

The second project, the Efficient First project, is from the brief of the artificial intelligence city. In this project students grapple with the future of work, contemplating the implications of scenarios where artificial intelligence assumes full control over employment opportunities. Armed with these inquiries, the students crafted an interactive game for the project, inviting participants to compete with AI for employment opportunities — an AI interview game.
Through this experimental gameplay, the project promotes reflection on human strategy within society, technology and democracy, and encourages consideration of the direction of human agency. And the third one is quite a special one. Even though this course is about the future and fictional worlds, I encourage students — they can actually do anything they want, even from a very different angle of creation. In this one, the student tried to reverse a problem of VR games into a tool to use in his design. Basically, in VR people always feel dizzy — maybe because of the device or the human body's reactions. But he actually used this dizziness to create what is called, in English, a parkour game. The player keeps running in this game in a dizzy mood, trying to get to the end of the Great Wall. So this represents a very special form of the technology through a unique experience.

Okay, the third phase is display. The main aim of this is to bring the students out of themselves and to communicate with people who don't understand the project, who don't know the project, and to see people's feedback and comments. And yeah, reflecting — so this is the very last slide. Reflecting on world building, it serves as a visual research method through design practice. Actually, I think world building is not only for teaching in design education; it's a kind of open, unfolded design process which you can use for research, for designers, and so on. So this year I'm also developing a new toolkit for inclusive design, combining world building with gamified participation actions for the next generation of designers. Yeah, thank you very much. Sorry, I made it too long. If you have more questions, please ask me afterwards.

Yeah, I was just about to say — okay, but thank you. Thank you. Yeah, and please do. I'm sorry we don't have more time, but we need to move on to the next one. So do we need to unhook you? Yeah, where's the other laptop? Ask questions while we're doing the switch here. Does anybody have any questions for Bin Chen while we switch? Maybe first in the back. No more awkward silences.

Thank you for the presentation — very fascinating projects. What fields of design do the students come from? The students are from the animation, game and interactive media departments. Fabulous. Are these group projects? It's a group project — five people in one group, and there are five or six groups every year. Thank you. Thank you. Thank you so much. Do you have any insights about the effectiveness of your approach? I mean, you're proposing the three phases that we can use in education. Do you have any insights into whether they are effective? Because there are so many didactic approaches to education. Can you elaborate a little bit on that — insights, maybe data or empirical evidence, or qualitative evidence, whatever? How detailed do you mean? In terms of — you introduced the three phases that we should use, you kind of proposed them, and I was wondering whether they work, you know what I mean? Because maybe they just do not work.
I don't mean that — of course I trust you that it works — but you know what I mean; maybe you have some insights on that? Okay, yeah, I think this refers especially to design for Chinese students. I myself learned design in China for my BA, so I actually suffered through many of these processes. But I mean, these three phases just focus on the general problems that students can have at the BA stage. So probably it doesn't work for all students — you know, students are always different, right? In my course, I guess 90% of students, after the course, feel it's fun and something new, and they start to become more active compared to before. Students here are usually very shy — they don't ask questions, they're not really active in researching or asking questions; no, they don't do that. After this, I think at some point they changed, because from the graduation-show feedback, the tutors and professors gave really positive feedback to me. So I think at some point it works, but it's still at a very beginning stage, and I need to develop these three phases deeper and further. Maybe there will be five phases in the future. Okay, thank you.

Now, you did speak a lot about world building as a methodology that's been tried in various institutions, in the paper that people can of course look at — all the papers are available as PDFs from the Expanded website. But we need to move on, so thank you. Thank you very much. Thank you very much.

All right, our next speaker is Juan Carlos Vasquez. He's an assistant professor at Xi'an Jiaotong-Liverpool University. Hello, hello, test. Okay, thank you very much. I'm very happy to be here sharing my paper presentation. It's entitled From Dice to Metaverses: Gamifying Musical Experiences. Super quick sound check. Yes, perfect.

All right. So this paper talks about three things in gamification and music, right? It's the integration of game-like elements into non-gaming environments; in this case, we're going to talk specifically about music composition, performance and listening experiences. It has a historical scope, so I dig out references and similar projects with these characteristics from the 18th century to modern digital metaverses. And the key areas of focus are how gamification has transformed musical creation, also performance techniques, and especially audience engagement, which is one of the critical reasons to use gamification in music performance and composition.

When it comes to the analogue era, we have some examples — this specific one, called the Ever-Ready Minuet and Polonaise Composer, is the first known musical dice game. Of course, we're talking about a time in which there was a limited range of elements to integrate into musical composition. Most likely, they used cards and dice: combinatorial approaches in which there were snippets of music distributed across several cards, and then you used the dice to select which ones to use. Then you had this sort of jigsaw puzzle of music that you could read from beginning to end, so there were a number of possible combinations in the end. We have a piece that is attributed to Mozart, but this is controversial — some people say, well, this is actually not made by Mozart. It's the image that you can see here to the right.
This matrix — it has the catalogue number K. 516f — is a very prominent example in which you also have this combinatorial approach, so these numbers represent fragments of music that you put together, in apparently 46.9 quadrillion possible combinations. And in the 19th century, we have this interface that you see here, in the small image. It comes with a box in which you can place these different cards, each with a small piece of music for piano, and then you also use dice to select which ones you're going to place. The beginning and the end have to be standard, of course, but in the middle you can use any number of combinations, for a total of 128 million possibilities. It's also very rare in music composition to create not only a composition system but also an interface that allows you to actually execute the system, which is quite interesting in this case.

In the early 20th century we find quite a few other examples. There was a shift in aesthetic trends, moving towards more experimental, interactive forms. For example, we have Rainforest IV, made by the artist David Tudor. It's a large-scale installation in which, as you can see in the picture, you have these sorts of sonic entities made of found objects. They were producing sound all the time, but they could also be activated by the audience, so it is a collaborative exercise in music making. It was very popular with kids, and that was something David Tudor actually encouraged — for them to have this playful approach, to see what they could get out of it, right? So this is experimentation through play. There's also an incorporation of amplified video game sounds that were triggered by some of these elements. And its significance is that it blurs the lines between composition, performance and audience participation — it really gave agency to the audience to make their own composition within the constraints predefined by the artist.

Transitioning to the digital age: of course, when we talk about gamification we tend to think about the digital age and the video game industry. In this case there was one milestone, this system called the UPIC, designed by the Greek composer Iannis Xenakis in 1977, which is a digital interface he designed based on his work as an architect. He used to actually compose music at architecture tables; it was part of his approach to composing. He was assuming this physical position with the interface he had to compose with, and then he made a representation of that in digital terms. This would allow people to use a common skill — drawing — drawing shapes onto this table that you can see there, and the system would translate that into sound. So you could create compositions by just drawing; like in a comic strip, you could create this sort of world made of shapes and different traces that would be represented in a musical composition. Very important here is the concept of the flow state, which is quite common in this type of interface: the psychological state of intense focus and engagement. That's the key word, right — engagement — which is something that happens very often in the video game industry. So this marked the shift towards fully digital interactive musical experiences.
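To make the combinatorics of the dice games described above concrete, here is a small sketch of how the Mozart-attributed minuet game is usually described as working: for each of 16 bars you roll two dice and use the total to look up a pre-composed measure in an 11-row table, so the number of possible minuets is 11 to the power of 16 — roughly 46 quadrillion, in line with the figure quoted above. The lookup table below is a placeholder, not the historical one.

# Sketch of a "Musikalisches Wuerfelspiel"-style dice game: two dice per bar,
# 16 bars; the dice total (2-12, i.e. 11 outcomes) indexes one of 11
# pre-composed measures per bar. The table below is a placeholder, not Mozart's.
import random

BARS = 16
# Placeholder table: measure_table[bar][dice_total - 2] -> measure id
measure_table = [[bar * 11 + row for row in range(11)] for bar in range(BARS)]

def roll_minuet(rng=random):
    """Assemble one minuet by rolling two dice for each bar."""
    minuet = []
    for bar in range(BARS):
        total = rng.randint(1, 6) + rng.randint(1, 6)  # 2..12
        minuet.append(measure_table[bar][total - 2])
    return minuet

print("One possible minuet (measure ids):", roll_minuet())
print("Total combinations:", 11 ** BARS)  # 45,949,729,863,572,161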
In the digital era proper, this is where we have the important pioneer projects. The first one — well, it's tricky to define exactly what the first one would be, but this one is definitely a very early one. Laurie Anderson, the American multimedia artist, did this piece called Puppet Motel. It was an interactive art CD-ROM experience with nonlinear storytelling: there was some sort of hub from which you could pick different — let's call them levels — ordered environments which are like self-contained pieces. I brought one very short example of that: "In this experiment two people have a conversation, then the room is sealed off, but the sound waves keep moving back and forth between the walls, back and forth, growing fainter and more garbled. Sometimes it can take 50..." Right. Right?

And the second one is from 2010, so a few years later. Pauline Oliveros partnered up with the people of the Avatar Orchestra, and they did it at a university in London called Brunel. They did a performance called Rotating Brains Beating Heart. It's an interactive remote performance done with 3D avatars inside Second Life, which is — let's call it a gamified platform; it's multiplayer, so you can meet with other real people represented by digital avatars, and you actually have a second life, so you can have a job and do other activities just in this digital environment. So they repurposed an existing gamified platform to do this experimental performance inside of it. They had a choreography, and it was telematic, so there were people located in different parts of the world, and there was a score and a conductor, so it was like an actual, proper concert. This is just a brief fragment of that. Thank you.

And of course, COVID-19 was an enormous moment of exploration for this specific type of experience. There was one in particular by Enrique Tomás, which was part of Ars Electronica four years ago, called the Sound Campus — the Metaverse Sound Campus — and that was this bear right here. It was a recreation of the Titanic, and it was also multiplayer; you got to pick the colour of your bear. And then you got to explore this exhibition that he curated, in which there were multi-channel pieces — these cubes, each one of them represents a speaker — and there were also pieces made specifically for Unity, which is the name of the game engine he incorporated into the experience. It was actually quite successful: it was very heavy to download, which is typically one hindrance, but in this case it didn't represent an obstacle, and many people returned to it, which again indicates a high level of engagement.

There were also experiments like those of Paul Turowski, a colleague from the University of Liverpool, called comprovisation: musical scores that were meant to be performed at your home, in which you play a game by performing the piece, basically. It gives you musical pitches, and then you have to advance in the game by performing the instructions the game gives you. And of course, on the more mainstream side of things, there was a concert done in Fortnite, which is a very common, very famous game made by Epic Games, the company that makes Unreal Engine.
They made this concert in partnership with Travis Scott, called Astronomical. So again they repurposed the game for the benefit of musical performance, and they had 27.7 million unique viewers, which is, well, a lot — physical events don't reach numbers of that extent.

This is my contribution to the discussion: Ecstasy Light Inertia, a digital experience that I finished last year, made inside Unreal Engine 4 and 5 — it was within that transition. It features spatial audio, Nordic nature, and narrative-driven interaction. So here's the trailer. "From the deepest reaches of Earth to the highest heights we go. I've seen many relics of my past and self in this world, but it is all fractured. Reflections of memories and dreamlike simulacra, all woven together with the thread of my strange life. Whatever mysteries this snowy land holds, it is my duty to discover. This world is made of me, after all. I must step into its facets, both terrifying and awe-inspiring. They are nothing new. Just me. Just me."

So, as I was mentioning, this narrative, which was meant to be the main tool to engage people in the experience and keep them progressing further, was a main part of it. And the general idea was, again, a virtual alternative to a traditional concert, set in a sound art gallery. Around the world we are experiencing this phenomenon where the average age of audiences going to classical music concerts is rising, and there doesn't seem to be a generational change. So this taps directly into that. It was funded by the University of Virginia, and there were some industry partners like Genelec, a Finnish company that produces speakers; the government of Finland also participated with funding.

Now, about how to disseminate this project, which is one of the main challenges — I had a two-pronged strategy. The first one was utilising one of the tools that the video game industry has for us, which is Steam. Here is the QR code, and you can download it; it's free. Since the 16th of July last year, I've had 30,000 downloads in 112 countries, which wasn't expected at all, because I had a total marketing budget of zero, which is something that happens in academia a lot. You typically get the budget for producing the project; its dissemination is events like this, so you're sharing the project, but you don't really have a budget. In the video game industry, marketing budgets are absolutely massive — they can go upwards into the hundreds of millions. So it was a surprise for me that this actually happened, and it's a testament to how organised the video game industry is when it comes to promoting projects that come from independent developers. Here's the link as well.

But, as I was mentioning, the strategy was also about trying to validate this as an artistic piece that could be presented in a museum, and it's currently being presented in the Han Shinar Museum in Suzhou as part of the Future Archives exhibition. The picture on the upper left is a sort of mock-up that I did in Unreal as well, the same engine, to demonstrate how I would like to have the piece exhibited in real life, and then they did exactly the same. Curatorial practices in China are extraordinary: they did a total rebuild of the building in three days, with a construction crew working 24/7.
It was a sight to see — from nothing to the entire exhibition, with 14 artists, in just three days, which is crazy. This is currently ongoing there until the 5th of October next year. It was selected as one of the most influential exhibitions of last month by the Akron Index, which is a prominent publication in China, which made me very happy as well. So it could be presented — and of course, the expectations and interactions are very different. When you download a game from Steam, you have certain interaction expectations: okay, this is a video game. When you go to a museum, you also have interaction expectations: this is a piece of art. So there are different parameters for how you approach and engage with the pieces presented in each one of those. And once I finish this, the next step will be trying to compile that information and reflect on it. So — I have 15 minutes, sorry.

The conclusion is that gamification and interactive design offer new ways to create engaging and accessible musical experiences, right? Because — 112 countries. The COVID-19 pandemic accelerated exploration, but the potential definitely extends beyond crisis situations. As usual, further research and creativity are needed. But the main point is to try to encourage people to use these interactive game environments — again, repurposing them, right? We can use them, because there is a massive influx of money poured into the video game industry; we can use that. It could be a viable alternative to, or supplement for, traditional concert formats or sound art formats. Thank you very much.

So maybe just a couple of questions, since we started a bit late. Hi, I really enjoyed your foray into the history of interactive music. I was wondering if you could help me make the connection between what you were talking about earlier, with interactive music, and the work that you're doing now? Yes, that's an excellent question. It's the incorporation of ludic elements, right? We relate dice to games that have a specific interaction design and a specific interface between people, and we also relate cards to the same — like playing poker, right? That is a game that has rules and a reward system: there is a person who wins, there is a person who loses. So all of that conjunction of rules is what determines a game, let's say in the more traditional sense. When we pour that into the digital arena, we also find these reward systems. In the case of my game, you have these challenges — actually puzzles — that you have to solve in order to progress, and the reward is getting to the end, where the purpose of the entire journey is revealed to you. You actually get to take a decision — there are two endings you can select — to unfold the entire narrative and discover it for yourself. So it's the same: a set of rules that leads to a satisfying ending.

But isn't the earlier work that you showed a way of determining the outcome of the music? Right. And so, I've seen a number of interactive music pieces, and I know there are some challenges to creating certain aesthetics when you give over control to the viewer with regard to the musical outcome. So are there interactive music components in your piece, or is it just gameplay, with the music as part of the environment?
Yeah, that's a very good question. So there are interactive components — there are installation-art pieces nested into the game. The narrative is what keeps you going forward, but you get to discover these musical spaces that I created. It's actually similar to the Puppet Motel example: they are like hubs, and then you go to places. Those places represent memories, and the character tells you what each one symbolises for her, but then you get to hear pieces that I've already made, nested in that way. It's presented like those concept albums from the 80s and 90s, where you have a narrative that connects all the pieces together into a single musical discourse — it's that kind of representation, but in a digital context. That's what I tried to achieve.

Okay, thank you for the presentation — also for the summary of related work, it was great. I was wondering: you said in the conclusions that it's a viable alternative to traditional concerts. Sometimes I go to concerts — concerts have a huge social aspect as well — but your piece looked like, let's say, a simplified single-player experience, so there is a huge difference. What do you think about that? That's a super valid point, yeah. My personal view is that it should work as a supplement to the experience, but when it comes to accessibility — especially for countries where people cannot travel here to experience the work we make, for instance because of income disparity — they have a fully fledged version of a piece they can experience at home. So it has this sort of dual intent to it, right? The social experience happens inside Steam: there were a large number of comments, people actually requesting things from me, sharing their pictures, sharing their theories about what the narrative means. So it becomes a sort of meta social moment that happens after the fact. But yeah, I agree with you — the social experience still has to happen to some extent. And I think they realised this very well in the Fortnite example with Travis Scott, because it was a multiplayer event, so people were actually interacting with each other, jumping and talking to each other, so that had a representation of that aspect to it. We are still a long way from recreating physical experiences fully in digital environments, but I think it does advance toward that — giving you the option of what kind of social interaction you want. I need to move on, so thank you very much, and of course we can continue the conversation later. Thank you. And you can do the multiplayer version after the video game industry gives you a big grant, right? Yeah, okay, perfect.

Our last but not least speaker is Wolfgang Kuhl. He is the institute director and an adjunct professor at the Technical University of Munich. Thank you. I can start with the film — maybe just with the still image, so that I can then start it. Let's pause briefly. And where do I press stop? The spacebar. All right. Hello. Last presentation for today — I hope you still have power. It's a pleasure for me to be with you today, and thank you for the invitation. Hopefully I've brought something beautiful to you; at least I hope so. So let us start with a short animation. Where can I stop it? Thank you very much. Okay. So, a silent moment for you all. We all live in hybrid spaces. But what do they look like?
A problem that remains is the joint representation of spatio-temporal data, for example in digital twins or in urban planning. And there is also a lack of a basic ontology for understanding hybrid spaces. Of course we have time maps, activity maps or cognitive maps, but they each focus on only one aspect at a time — space, or time, or perception. Our space-time model joins all three aspects and is also able to visualise dynamic topologies of hybrid spaces. Many authors deal with the dialectics of real space and virtual space, and of course there is Milgram's reality-virtuality continuum. Both models have advantages, and both models show a direction for how to organise the way from real space to virtual space, or from virtual space to real space. But space is not the same as place, which is what we see in many projects today. For our model, it turned out to be very useful to change the concept of space into the concept of place, because media are places. Media can successfully be described as places — so-called topoi in Greek — such as games, films, animations or books. These are places.

For our purposes, four media types can be distinguished: primary media, secondary media, tertiary media and quaternary media. As you all know, primary media are direct communication, secondary media are print media, tertiary media are electronic media, and quaternary media are interactive media. And compared to neutral space, places contain memory and interaction — I think we saw that in many projects today. Places are closely connected to memories and human perception, and you are able to adopt or appropriate places through interaction. That is what we call function in architecture: living, dwelling, working, recreation, education. Places are defined by this human interaction. And the funny thing is, we can appropriate not only one place at a time through media, we can appropriate many places at a time. And this is what we call multi— what's it called? Multi... I'm sorry, we're coming back to that later.

Okay, so our space-time model joins three levels: a level of places and media types, a level of perception, and a level of time and interaction. The level of perception is organised through Karl Popper's three worlds: world three, knowledge and abstractions; world two, consciousness and mental processes; and world one, the physical world. Thus we are able to depict and illustrate topologies of hybrid spaces. The architect Bernard Tschumi coined the term topology with regard to the dynamic aspects of architectural space. Kaya Tulatz describes isotropic and non-isotropic space topologies in the work of Deleuze and Guattari. This depiction allows us to recognise specific patterns in the topologies of hybrid spaces. In our model, different topologies of hybrid spaces are distinguished according to four parameters: the parameter of range, the parameter of interactions, of response rate, and of modality. I'm sorry, there are letters missing at the moment. R would be the range — that's persons per place, or reached persons per medium. I would be the interactions per place, and the response rate is the ratio between the interactions per place and the persons per place. And we have modality, which is the mix of the four media types. According to these parameters, we are able to distinguish different topologies of hybrid spaces. This model was tested on the occasion of an art exhibition.
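A minimal sketch of how the four parameters just defined might be computed per place follows, assuming simple per-place counts of reached persons and interactions. The numbers and place names are hypothetical, not the exhibition data; only the definitions follow the talk, and the way modality is quantified here (share of the four media types present) is one possible reading of "mix of media types".

# Hypothetical per-place counts illustrating the four parameters of the
# space-time model: range (persons per place), interactions (per place),
# response rate (interactions / persons) and modality (mix of media types).
# The numbers are made up; only the definitions follow the talk.

places = {
    # place             persons  interactions  media types present (1..4)
    "gallery space":    (120,     45,           {1, 3}),   # primary + tertiary
    "online catalogue": (800,     60,           {4}),      # quaternary only
    "printed flyer":    (300,     5,            {2}),      # secondary only
}

for name, (persons, interactions, media_types) in places.items():
    response_rate = interactions / persons   # dimensionless ratio
    modality = len(media_types) / 4           # one simple way to quantify the mix
    print(f"{name:17s} range={persons:4d} interactions={interactions:3d} "
          f"response_rate={response_rate:.2f} modality={modality:.2f}")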
So places and media types were mapped on the level of places and media types; range, interactions and response rates were recorded; and different topologies of hybrid space could be identified according to these parameters. Most interestingly, the joint representation of real and virtual places revealed that range and urban density turned out to be corresponding concepts — urban density is the corresponding concept to range in media. And we could make out relations between the three different levels. For example, an online survey showed a positive correlation between modality and individual activity or creativity, between the level of perception and the level of places and media types. The funny thing is that modality shows interesting relations to transportation in real space: when you choose a medium, you choose a certain means of transport. So this turns the view of urban activity to the media side, not the spatial side. And there is the word I was missing: multi-locality. This is what could be depicted successfully in the model — you can depict the multi-locality of events on the level of places and media types. On the third level we could visualise three different parameters in bubble diagrams, and they show correlations between the four research parameters. So it turned out to be very useful to use this model and to adapt it to a particular event, and other parameters would be interesting to test, especially on the level of perception, when we think of presence or affordance — these psychological phenomena we have when we are working with VR and AR. So maybe you will see the animation we saw at the beginning with a different perspective now. You might want to know what the animation from the beginning was showing, so I will get back to the animation again, when it's possible. We need to go up to 1.50 metres. Thank you.

We have time for a few questions. Yes, please — anyone want to start out? Okay, well, I have a question just about that visualisation. Was the elapsed time of the video significant to the graph? Can you explain that more? Yes, it is. It's just like a space-time model where you have the time axis as the z-axis. And it's a prototype, so it should have been equally spaced as time goes by, but we used a spline animation, and this is not really exact, so this might be the reason why the bubbles are not on the same level sometimes. But that's it. Yeah. And can you explain — I noticed the online catalogue was in the quaternary section. The online catalogue; I wondered if it would be closer to the exhibit. Can you explain why it's in that sort of remote location in your model? Okay — you think that primary to quaternary is a hierarchy, to the better or the worse? Not better or worse, but adjacency. Maybe you can just explain more how things ended up in each category. Okay. It's not meant to be more or less adjacent to the person; it's just according to the reality-virtuality continuum. When you start from the virtual side, you have the quaternary media, and you end up with the primary media. So the media are sorted along Milgram's reality-virtuality continuum. Okay, thank you, I understand better. But it's a very good question. Do we have other questions? I know you validated the model on an exhibit. Does it feel like you would get similar results with other exhibitions, or do you imagine it would vary a lot?
I think the question is what you want to look at, because it's not possible — well, maybe it would be possible to present all parameters at the same time, but it would be confusing in some cases. So we decided to visualise just two parameters from the level of time and interaction, because with the time axis you're able to present three parameters. And if you want to have a look at range, for example, you will have to change the point of view. It's just like the dialectics of whether it is a photon or a wave, or something like that. So the model is able to depict many more parameters, but the question is whether that is useful when you are detecting patterns in it. Thank you. Thank you. All right, thanks. And please join me in thanking all of our presenters today.