This week I have spent most of my time writing my dissertation and working on applications for jobs and master's degrees. As the internship I am applying to just now is in UX, I am going to focus more on the interaction with the piece for the time being.
This means that my first priority shall be switching my input device from mouse and keyboard to mobile phone. I have explained my rationale for this mode of interaction in previous blog posts, so I will summarise it here: if the piece functions on a phone/tablet, it should be portable over the internet for people to experience. A mobile phone with limited interaction is also extremely accessible, and considering I want this piece to reach people without much technological proficiency, this is a good approach.
After much effort I finally got Unity to recognise my Android phone. I needed to install Android Studio and point Unity at the SDK and Gradle that came with it. I can now use my phone inside of Game view, but now I need to get the input from my phone. The Unity documentation suggests I use the new Input System instead of the Input Manager, but I am a little bit skeptical, since I got myself into bother by using the URP instead of the old Unity renderer.
I found the input controls for mobile devices in the Input Manager package, so I am just going to use that. So far I have found the gyroscope settings and linked the acceleration to the rotation. I can control the view with my phone now, but it doesn't match the exact direction of the phone. I'll need to try and figure this out.
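For anyone attempting the same thing, the mismatch between the phone's real orientation and the in-game view is usually a coordinate-frame problem: the gyroscope reports a right-handed, sensor-space quaternion, while Unity is left-handed and y-up. A minimal sketch of the usual remap (the class and field names are my own, not from my actual project):

```csharp
using UnityEngine;

// Attach to the camera. Maps the phone's gyroscope attitude onto the
// camera's rotation via the legacy Input class.
public class GyroCamera : MonoBehaviour
{
    void Start()
    {
        // The gyroscope is off by default and must be enabled.
        Input.gyro.enabled = true;
    }

    void Update()
    {
        Quaternion q = Input.gyro.attitude;
        // Negate z and w to flip handedness, then rotate 90 degrees
        // around x so holding the phone upright looks forward.
        Quaternion remapped = new Quaternion(q.x, q.y, -q.z, -q.w);
        transform.localRotation = Quaternion.Euler(90f, 0f, 0f) * remapped;
    }
}
```

Without the final Euler(90, 0, 0) correction the view tends to point at the floor or ceiling, which matches the "doesn't quite line up" behaviour described above.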
Today was spent refactoring old code. I actually found this really helpful, as towards the end of term my code was getting quite messy and fragile. I decided I would go and adjust my radial UI select code so it would work with the phone input.
I noticed that before, I had just used an animator that cycled through a frame-by-frame animation. I have heard this is quite inefficient, so I thought I would instead recreate the animation through interpolation in code. There is a library for Unity called LeanTween that handles a lot of the mathematics behind interpolating UI elements.
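The animator-to-tween swap boils down to very little code. A sketch of what this looks like with LeanTween, assuming a radial-filled UI Image (the class and field names here are illustrative, not my actual code):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Replaces a frame-by-frame animator with a single tween: LeanTween
// interpolates a float from 0 to 1 over the duration and writes it
// into the Image's fillAmount each frame, so no animation clip or
// Animator component is needed.
public class RadialFill : MonoBehaviour
{
    public Image fillImage;      // UI Image with Fill Method = Radial
    public float duration = 1.5f;

    public void StartFill()
    {
        LeanTween.value(gameObject, 0f, 1f, duration)
                 .setOnUpdate((float v) => fillImage.fillAmount = v)
                 .setEase(LeanTweenType.easeInOutQuad);
    }

    public void CancelFill()
    {
        // Stop any running tweens on this object and reset the circle.
        LeanTween.cancel(gameObject);
        fillImage.fillAmount = 0f;
    }
}
```

The easing type is a free choice; easeInOutQuad just gives the fill a slightly softer start and finish than a linear tween.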
Tomorrow I'll clean up this code and post a video documenting it.
I wrote some more of the dissertation this morning; I am around 8,000 words now, so I am back on schedule to reach 10,000 words a week before the first draft hand-in.
I went in to polish my code for the UI fill bar animation because it was behaving a bit strangely. This ended up being more of a task than I had anticipated, but after a good few hours I got it working perfectly. I now have a singleton that keeps track of what the player is looking at, and if the GameObject they are looking at has a "selectable object" tag, it triggers the UI functionality.
Hopefully the time I put in today is time saved in the future. This functionality should be easy to implement for any new objects.
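The gaze-selection singleton described above can be sketched roughly like this: a camera-centre raycast every frame, with a tag check gating the UI. All names here (the class, the tag string) are placeholders for illustration:

```csharp
using UnityEngine;

// Singleton that raycasts from the centre of the view each frame and
// records what the player is currently looking at. Other scripts read
// CurrentTarget to decide whether to start the radial fill UI.
public class GazeTracker : MonoBehaviour
{
    public static GazeTracker Instance { get; private set; }

    // The selectable object under the gaze, or null if there isn't one.
    public GameObject CurrentTarget { get; private set; }

    void Awake()
    {
        // Standard singleton guard: destroy duplicates.
        if (Instance != null && Instance != this) { Destroy(gameObject); return; }
        Instance = this;
    }

    void Update()
    {
        // Ray through the middle of the screen (viewport 0.5, 0.5).
        Ray ray = Camera.main.ViewportPointToRay(new Vector3(0.5f, 0.5f, 0f));

        if (Physics.Raycast(ray, out RaycastHit hit) &&
            hit.collider.CompareTag("SelectableObject"))
        {
            CurrentTarget = hit.collider.gameObject;
        }
        else
        {
            CurrentTarget = null;
        }
    }
}
```

CompareTag is preferable to comparing `hit.collider.tag == "..."` because it avoids allocating a string every frame.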
The past few days have been a mixture of dissertation writing, masters applications and studio work. They all seem to be going well. I am about 9,000 words into my dissertation, and I hope to be at 10,000 by the end of next week so I can begin to refine it.
Yesterday I went back into Unity and polished up the start menu UI, letting it scale with the phone's screen. I still have buttons for the scenes to make it easier for me to navigate while I'm developing.
I am also redoing the wake scene at the beginning. I am now thinking of it in terms of relics in a museum rather than items left at a wake. When one looks at an item in a museum, one can't help but picture its use in the past, in terms of human communities. I don't mean to literally place the audience in a museum, just to frame the items in a similar fashion.
I now have a smooth prototype moving from the launch of the app to the infancy scene. However, I really need to rethink it. A lot of time has passed since I made it, and it no longer matches the tone and feel of what I want to create.
I am finding that working from the beginning of the experience is helping me visualise what the piece is. I was taking a 'blocking-in' approach before, but I think I have a good enough idea now that I should really just crack on with it. Now I must consider how I am going to adapt this infancy scene.
As I was rethinking my approach to the infancy scene, I was most attracted to the idea of the relation between mentor and child (before it was mother and child, but 'mentor' allows for broader approaches in media etc.) being taken over by corporations in the form of technology and media. For example, I am assuming that before mass media, children would idolise people they knew directly, whether family or a respected member of the community. With the advent of film and television, celebrities became idolised by the youth, and now it's social media influencers. Now our idols are selected by algorithms.
I would like to explore how this invisible corporate algorithm feeds content to the child. I am thinking that the audience can be taken through a plasticky, colourful landscape on a river of milk (like before) and occasionally stop to watch a puppet show. These puppet shows depict notorious cases of algorithmically selected videos going wrong and showing children inappropriate content.
I think this fits the context much better than the cereal bowl scene did so I am just going to start building it and see how it goes.
I want to get a feel for how the scene may look so I have been experimenting with water shaders.
I followed a Unity tutorial and changed the colours to be more milky. I don't like this result, partly because it falls into the uncanny valley and sets up a standard of realism that will be difficult to maintain. So I removed the normal maps, turned down the glossiness, and here's the result.
I am going to carry on in this style. I am still not sure what to do with the lighting. I don't know whether to allow for the realistic PBR shading model, to simplify the shading to banded or Lambertian lighting, or to ignore lighting altogether. I will continue to make this scene and then experiment a little more.
As a quick explanation of the shader above: the shader is set to transparent, and the camera's z-buffer is used to calculate the depth of the water. This is then normalised to a 0-1 value that can be mapped between deep and shallow colours. A noise texture driven by time displaces the vertices up or down, creating the illusion of waves.
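The two calculations the shader performs (a per-fragment depth fade and a per-vertex wave displacement) boil down to very little maths. Here is that maths written out as plain C# functions, purely to make the explanation concrete; these are not the shader itself, and all names are my own:

```csharp
using UnityEngine;

// The water shader's two calculations, mirrored in C# for clarity.
// In the actual shader, DepthColour runs per fragment and
// DisplaceVertex runs per vertex.
public static class WaterMath
{
    // Depth fade: the difference between the scene depth (from the
    // z-buffer) and the water surface's own depth, normalised to 0-1
    // and used to blend between the shallow and deep colours.
    public static Color DepthColour(float sceneDepth, float surfaceDepth,
                                    float maxDepth, Color shallow, Color deep)
    {
        float fade = Mathf.Clamp01((sceneDepth - surfaceDepth) / maxDepth);
        return Color.Lerp(shallow, deep, fade);
    }

    // Wave displacement: scroll noise coordinates with time and push
    // the vertex up or down by the sampled value.
    public static Vector3 DisplaceVertex(Vector3 vertex, Texture2D noise,
                                         float time, float amplitude)
    {
        Vector2 uv = new Vector2(vertex.x + time, vertex.z + time);
        float sample = noise.GetPixelBilinear(uv.x, uv.y).r;
        vertex.y += (sample - 0.5f) * 2f * amplitude; // remap 0-1 to -amp..amp
        return vertex;
    }
}
```

The key intuition is that water over a shallow riverbed produces a small depth difference (fade near 0, shallow colour) while deep water produces a large one (fade near 1, deep colour).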
I tried to build to my phone today and it didn't go very well. I get an error about the SDK not being up to date. The solutions online seem quite complicated, so my plan of action is just to update Unity from 2020.1 to the latest version (2020.2) to see if they have updated the built-in SDK and save me the bother. Hopefully nothing in my project breaks (I have a back-up if it does), but considering they are very close versions, hopefully there is little difference.
Earlier on today, I managed to set up Unity's cloud build service, which will hopefully mean I can make Android builds much quicker. I also managed to set up a repository that I can make much more sense of, so that's another positive.
I had no luck with the new version, so I am going to try again with 2020.1. I noticed that my screen shader no longer functions, and that Shader Graph has been updated, but it's too much of a risk to change my workflow this far into the project.
Eventually I managed to get Unity to build. The solution was to run one of Unity's files through the built-in JDK from the command prompt, after which the problem fixes itself. After that I had no issues with building. So far there appear to be no performance problems, but I am just about to try the more graphically intense scenes.
I also managed to get the cloud build to run, which is pretty cool because I can build straight from my repository. However, it appears to build much slower than my computer does, which means I don't have much use for it.
The desire scene, which uses my custom shaders, runs well. The project seems to be locked at 30fps but has never dipped below that, which is a good sign. My phone has mid-range specs, so hopefully this performance should be the norm across all devices.
Above is what I have of my new childhood scene so far. The idea is to place the audience in an 'It's a Small World'-type ride where children's content is selected for them by algorithms. It is a concept we are quite familiar with in Interaction Design, but I want to explore it in terms of a child's education being mediated by corporate automation.
I plan on having three puppet shows, or some kind of display, that perform the sort of bad content shown to kids. One will be fairly normal, maybe a bit eerie; the next will be sexual; and the third violent. I have read of many cases in which YouTube has shown children content that is bizarrely sexually explicit, like Mickey Mouse masturbating in a theatre. There are even more depictions of violence, where kids' cartoon characters like Paw Patrol commit suicide or are involved in a car crash.
I am also interested in how I can use interactivity, or the lack thereof, to illustrate a point. The only way to avoid this content is to stop watching; the only way to escape this ride is to close it on your phone. Perhaps I can give the player an "are you still watching?" message, which is interesting in itself.
I also did a lot of work on shaders. I have decided I am going to avoid using lights in these scenes, partly to give them more of an 'unreal' feel and partly to improve performance. To give the objects depth, I made fragments further from the camera render darker.
To emphasise the three dimensions even more without implementing any lighting, I used the directions of the normals to tint the surfaces. This means two faces facing different directions will have slightly different colours. In the gallery below you can see the directional tint and then the tint combined with the other shader data.
I am really pleased with the result of this shader: it looks like there is lighting applied, yet it doesn't make any physical sense. Things appear three-dimensional and there are differences in shading, but the materials don't behave like light in any physical way.
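For the record, the fake lighting described above reduces to two small operations per fragment: a darkening factor based on camera distance, and a tint based on the world-space normal. Sketched in C# as a mirror of the shader logic (all names are mine, and the choice of which normal component drives the tint is an illustrative assumption):

```csharp
using UnityEngine;

// What the unlit 'fake lighting' shader computes per fragment. No
// lights are sampled anywhere: distance darkens the base colour, and
// the surface normal's direction mixes in a tint, so differently
// oriented faces pick up slightly different colours.
public static class FakeLighting
{
    public static Color Shade(Color baseColour, float distanceToCamera,
                              float maxDistance, Vector3 worldNormal,
                              Color tint, float tintStrength)
    {
        // Fragments further from the camera render darker.
        float darken = 1f - Mathf.Clamp01(distanceToCamera / maxDistance);
        Color shaded = baseColour * darken;

        // Remap the normal from -1..1 to 0..1, then use one component
        // to drive how much tint this face receives.
        Vector3 n = worldNormal.normalized * 0.5f + Vector3.one * 0.5f;
        return Color.Lerp(shaded, tint, n.x * tintStrength);
    }
}
```

Because neither term references a light direction, the result reads as shading while staying completely detached from any light source, which is exactly the 'unreal' quality the scene is after.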
Finally, I made some music for the first performance (the hand). I took the 'Finger Family' song, which is extremely popular on YouTube, and added some effects in Audition to make it sound like it was playing out of a cheap speaker in a room. Below is the result...
This morning I had a tutorial with Gillian. We discussed the use of sound in my piece and the rules around using other people's work. It sounds like there are a lot of pitfalls in using sounds from film and TV, which isn't great news for me as I planned on having references to them in my work. I can obviously find workarounds, or perhaps there are rules that would allow me to use certain sounds for a certain length of time.
I also asked about the use of other people's 3D models. I would preferably make them all myself, but that is extremely time-consuming for something that is not essential to my piece. My plan is to put other people's free (or paid) models in as placeholders, and if time allows, I'll begin to remodel objects to look exactly how I want.
I don't expect to get much studio work done this week as I have a dissertation draft hand-in next Friday. Hopefully I can get a decent draft done by the end of this week so I can work on studio again next week.
This evening, whilst reading Jean Baudrillard: From Marxism to Postmodernism, I came across a paragraph that seemed interesting, so I thought I should write it down: "the sole efficacious response to power is to give it back what it gives you and that is only possible symbolically through death" - taken from Symbolic Exchange and Death.