Tuesday 4 July 2023

Alpha design

 

Description:

For the first version of Dynamic Landscapes, I plan to simply let the musician move about the virtual space with their voice. The controls consist of:
  • Long note: move forward for the duration of the note
  • 2 short notes: rotate the player by an amount dependent on the width of the interval (see the interval sketch after this list)
  • Dynamics: louder notes result in faster movement, quieter notes in slower movement
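
As a first pass at the rotation control, below is a minimal sketch of how an interval's width could be computed from two sung frequencies. The 12·log2 formula is standard equal temperament; the 15-degrees-per-semitone mapping is a placeholder assumption of mine, not a settled design choice.

using System;

public static class IntervalMath
{
    // Width of the interval between two sung pitches, in semitones
    public static int Semitones(float f1, float f2)
    {
        // Equal temperament: one octave (frequency ratio 2) = 12 semitones
        return (int)Math.Round(12.0 * Math.Log(f2 / f1, 2.0));
    }
}

// Example: Semitones(440f, 660f) ≈ 7 (a perfect fifth), which could
// rotate the player 7 * 15 = 105 degrees under the assumed mapping.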



Code Structure:

Scripts (a rough C# sketch of Player_Move follows this list):
  • Player_Move
    • Set_Speed(int)
      • Set the player's movement speed
    • Move_Forward()
    • Rotate(int)
  • Voice_Input
    • Read_Volume()
      • Read the volume of the input and alter the player speed in Player_Move
    • Interpret_Input()
      • Read the input's length and pitch to determine how to move the player
    • Move_Player()
      • Use the interpreted data to move the player accordingly
  • Game_Manager
    • Start_Game()
      • Reset all variables and the player position to start the game
    • Stop_Game()
      • Exit the playing state
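
To make the structure concrete, here is a rough, untested C# sketch of Player_Move. The default speed, the linear volume-to-speed mapping, and the 15-degrees-per-semitone rotation are all placeholder assumptions rather than final design decisions.

using UnityEngine;

public class Player_Move : MonoBehaviour
{
    private float speed = 2f; // placeholder default, metres per second

    // Set the movement speed, e.g. from the volume read in Voice_Input
    public void Set_Speed(int volumeLevel)
    {
        speed = volumeLevel * 0.5f; // assumed linear mapping
    }

    // Called every frame while a long note is sounding
    public void Move_Forward()
    {
        transform.Translate(Vector3.forward * speed * Time.deltaTime);
    }

    // Rotate by an angle derived from the interval width (in semitones)
    public void Rotate(int semitones)
    {
        float degrees = semitones * 15f; // assumed 15 degrees per semitone
        transform.Rotate(0f, degrees, 0f);
    }
}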

Objectives for coding:

  1. Set up a simple landscape using Unity shapes
  2. Move the character around the space using the keyboard
  3. Read microphone input (see the sketch after this list)
  4. Recognize long notes and short notes
  5. Move forward with long notes
  6. Link speed to dynamics
  7. Rotate with short notes
  8. Change the degree of rotation based on the interval size
  9. Create a more interesting landscape
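
For objective 3, below is a minimal sketch of reading the microphone with Unity's built-in Microphone API. The default device (null), 10-second looping clip, and 1024-sample RMS window are assumptions to be tuned later.

using UnityEngine;

public class Voice_Input : MonoBehaviour
{
    private AudioClip micClip;
    private const int SampleRate = 44100;

    void Start()
    {
        // null = default microphone; loop so recording runs indefinitely
        micClip = Microphone.Start(null, true, 10, SampleRate);
    }

    // Current input volume as the RMS of the most recent samples
    public float Read_Volume()
    {
        float[] samples = new float[1024];
        int pos = Microphone.GetPosition(null);
        int start = Mathf.Max(0, pos - samples.Length);
        micClip.GetData(samples, start);

        float sum = 0f;
        foreach (float s in samples) sum += s * s;
        return Mathf.Sqrt(sum / samples.Length);
    }
}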

Websites that seem helpful for coding:


Monday 3 July 2023

Consolidating Project Vision

Project Description

Artistic Vision:

Dynamic Landscapes will be a VR project. The musician will use a VR headset to view a digital landscape, e.g. a sunny field or a dingy forest. Here they will use their voice to move around the space; long notes will move the person forward, and pitch intervals will rotate the person's perspective left or right by varying degrees depending on the interval size. The musician will use their voice to navigate the space, exploring whether different settings or game goals create different genres of music.

I will create this using Unity3D with C# scripts analysing the spectrum data to get information about the notes sung.
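
A rough sketch of that analysis, assuming the microphone clip is routed through an AudioSource so GetSpectrumData sees the live signal. Taking the loudest bin as the pitch is a simplifying assumption and would likely need refining for real voices.

using UnityEngine;

public class PitchReader : MonoBehaviour
{
    private const int SpectrumSize = 1024;
    private float[] spectrum = new float[SpectrumSize];
    private AudioSource source;

    void Start()
    {
        source = GetComponent<AudioSource>();
    }

    // Estimate pitch (Hz) as the frequency of the loudest spectrum bin
    public float EstimatePitch()
    {
        source.GetSpectrumData(spectrum, 0, FFTWindow.BlackmanHarris);

        int peakBin = 0;
        for (int i = 1; i < SpectrumSize; i++)
            if (spectrum[i] > spectrum[peakBin]) peakBin = i;

        // Each bin spans (sampleRate / 2) / SpectrumSize Hz
        float binWidth = AudioSettings.outputSampleRate / 2f / SpectrumSize;
        return peakBin * binWidth;
    }
}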

A possible way to perform the score would be in a theatre/performance space, with the musician wearing the VR headset on stage and a large screen/projector behind them displaying their view of the virtual landscape in real time.

Transforming Creativity:

This project takes a virtual landscape and uses that as a musical score, conveying the feelings of the music it wants to produce, while letting each performance be different, as the musician chooses which way they want to go and what sounds they want to produce.

A composer can create a new landscape to communicate a new musical idea, an atmosphere for the piece, while the musician improvises and plays with that theme. I think this can help transform how music is seen as a creative act: rather than the composer setting out the notes and the musician choosing how to emote with them, the musician chooses the notes while the composer sets the atmosphere, which can create new, unique pieces.

Inclusivity:

This score allows anyone to create music, without the barrier of needing a musical education to understand a traditional western score. Anyone can compose by drawing out a landscape and having it translated into a 3D model of their world for people to perform. And because the controls are simple vocal sounds, anyone able to make vocal noise can play the piece without vocal training.

Drawbacks to this project include that it is visually based. Someone with limited/no sight could not perform one of these scores, though they could compose by describing a scene to someone/a generative AI, and hear another musician perform the piece.

Technological novelty and approach:

I will be using Unity to create a VR app, either creating 3D models myself using Blender or obtaining assets online. I will then write scripts to detail object behaviour using C#, taking in audio input and analysing it using the spectrum data to work out the characteristics of the notes and translating them into movement.
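
As a sketch of how these pieces might connect, here is a minimal input-to-movement loop, reusing the Player_Move and Voice_Input scripts sketched in the 4 July entry. The volume gate and speed scaling are placeholder assumptions.

using UnityEngine;

public class Move_Player_Loop : MonoBehaviour
{
    public Player_Move player;        // planned movement script
    public Voice_Input voice;         // planned input script
    private const float Gate = 0.02f; // assumed silence threshold (RMS)

    void Update()
    {
        float volume = voice.Read_Volume();
        if (volume > Gate)
        {
            player.Set_Speed((int)(volume * 100f)); // louder = faster (assumed scale)
            player.Move_Forward();                  // a sustained note keeps the player moving
        }
    }
}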

As a stretch goal, to give the composer more agency over the piece, I will create goals/mini-games the musician can choose to complete, guiding their exploration and subsequent music creation in a certain direction.

Aims & Objectives

Aim:

  • To seek new knowledge in how a VR game can be used by a composer and musician to create a digital music score

Objectives:

  • Create a VR project where the musician can move about the space using their voice.
    • The program will display a virtual world
      • The program will display multiple worlds, acting as movements of the piece
      • The player will be moved to a new world after a goal is completed or a certain amount of time has elapsed
    • The program will pick up vocal input.
      • The program will recognize a note's duration, volume and pitch (see the note-classification sketch after this list).
    • The program will move the player based on the vocal input.
      • The program will move the character forward for the duration of long notes.
      • The program will rotate the player based on the interval width of 2 short notes.
      • The program will alter the speed based on the volume of the input.        
  • The player may choose to complete objectives inside the landscapes
  • The score will be iteratively tested on an external musician, collecting feedback to improve
  • A new phase will be designed with the musician’s feedback
  • The score will be performed at the end of development
  • The composer's experience will be recorded
  • The musician's experience will be recorded
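
A hedged sketch of the duration recognition behind the long/short note objectives: track when the voice rises above and falls below a volume gate, and classify the note by how long it lasted. The 0.02 RMS gate and 0.5 s long-note boundary are placeholder values, not measured ones.

using UnityEngine;

public class NoteClassifier : MonoBehaviour
{
    private const float VolumeGate = 0.02f;  // assumed silence threshold (RMS)
    private const float LongNoteSecs = 0.5f; // assumed long/short boundary

    private float noteStart = -1f;           // time the current note began, -1 = silence

    // Call once per frame with the current RMS volume
    public void Step(float rmsVolume)
    {
        bool sounding = rmsVolume > VolumeGate;

        if (sounding && noteStart < 0f)
        {
            noteStart = Time.time;           // note onset
        }
        else if (!sounding && noteStart >= 0f)
        {
            float duration = Time.time - noteStart;
            noteStart = -1f;                 // note ended
            if (duration >= LongNoteSecs)
                Debug.Log("Long note: " + duration + "s");
            else
                Debug.Log("Short note: " + duration + "s");
        }
    }
}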


Methodology

I will use an agile methodology during this project, continuously bringing in a musician to use the technology, evaluating their experience, and taking feedback to adapt and alter the project. This will bring new points of view to the project, preventing the tunnel vision that could set in if I only stuck to my own ideas. It will also give the app a user focus from the beginning rather than as an afterthought, ensuring the score is understandable and letting me analyse how much digital knowledge is needed to interpret it.

The structure of each phase (alpha, beta, final) will be as follows:

1. Design

a. The design stage starts with (if applicable) looking at feedback from the last demo with the musician. I will analyse what went well and what needs to be improved, translating these into new features or upgrades to add to the next implementation. These may include ideas to make the score more interesting or interactive, and ways to improve usability so the controls are more understandable or intuitive. In the first phase, with no previous demo to draw on, I will instead come up with the initial features I see fit for the first design.

b. Next, these new features/upgrades will be realised through diagrams and plans. I will plan out how the features will be integrated into the previous design, adding to or redesigning the last iteration's diagrams. After this stage, I will have a full idea of what I want the project to look like at the next demo – the end of this iteration.

c. After that comes the research section. Here I will look into the new techniques I plan to use and find useful resources to help me. I may also look for assets if the new features include editing/creating new digital landscapes.

d. Lastly, I will write up a step-by-step plan of the order in which I plan to implement the new features.

2. Code

a. The coding stage includes, surprise surprise, coding the project, following each of the steps decided on at the end of the design phase.

b. If all steps are completed, I will start working on stretch goals before the next demo.

3. Demo

a. In the demo stage, the musician is brought in to perform the score. They will use the most recent working version of the project; if new elements are mid-implementation, it will revert to the previous commit of working code.

b. The musician will be filmed playing the score and interviewed afterwards to gather feedback for the next design.

c. The interviews will be semi-structured, more of a conversation between the musician and me.

d. They will also be asked to perform a Stimulated Recall Method (SRM). The SRM involves replaying the piece sung by the musician during the demo while the musician narrates their stream of consciousness throughout the process, recounting the decisions they made and any meaningful experiences they had.

e. This concludes the demo stage, and the cycle returns to design.


Insights & Conclusion

- At the start of this project, my main idea was to make a game based on exploration. The player would sing to move about a world, collectin...