Category Archives: DAT301 – Real-Time

DAT301 – Mind’s Eye – Technical Post

Mind’s Eye was a very technically challenging project to complete. We started with a general understanding of some of the core principles that would be fundamental, but we had a long way to go.

As the main programmer of the group I focused on these problems. Here is an overview of the main challenges that we faced.

Seeds

One of the technical challenges we had to face was running the same visualisation across all displays and computers. Fortunately random numbers in computing are not random at all, just pseudo-random, and therefore predictable and repeatable. Random number generators tend to use a ‘seed’ value which acts as the starting point for the numbers generated. Some of these generators let you set the seed value, so provided all the clients fetch the same number of random numbers in the same order, they will produce the same sequence. By sending the same seeds to the clients and having the clients run the same code, the random effects generated will be identical. We set the seed to the pitch of the sound being visualised, which had a good effect.
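As a rough sketch of the idea, Processing’s randomSeed() sets the seed for random(); deriving it from the detected pitch might look something like this (the spawnParticles() function and the particle count are only illustrative):

// Every client that seeds random() with the same value, and then makes
// the same calls in the same order, gets an identical sequence.
void spawnParticles(float pitch){
  randomSeed((int) pitch);        // same seed on every client
  for(int i = 0; i < 100; i++){
    float angle = random(TWO_PI); // identical 'random' angle on every client
    float speed = random(1, 5);   // identical 'random' speed on every client
    // create a particle with this angle and speed
  }
}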

Framerate

Another problem was that if the framerate of one of the clients dropped, it would fall behind the others. Because positions are updated each frame, on a client dropping frames like this the particles would appear to move more slowly. Processing provides a variable called ‘frameRate’ which gives the current number of frames per second, but it is calculated over a number of frames, so by the time it shows a drop it is already too late to do anything about it. We could raise the target framerate to catch up, but since the client is already struggling to reach the existing framerate it could be a while before catching up is possible, so there would still be significant jumping.

The way around this is to separate the code for controlling the position from the code for drawing it, and to skip drawing when the frame rate is low. This leads to another problem: flickering. The frames that are drawn are all correct and the particles move at the correct speeds, but we needed a way to avoid the flicker. The solution was to start a timer each frame and only draw particles that fall within the frame’s available time budget; this way only some particles are skipped and the position calculations can carry on at 30 frames per second.
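The drawing loop ended up along these lines: positions always update, but drawing stops once the frame’s time budget is used up (the particle list, the Particle methods and the 30 millisecond budget are illustrative, not the actual code):

ArrayList<Particle> particles = new ArrayList<Particle>();

void draw(){
  // Positions are always updated so the simulation stays in step
  // with the other clients, even on a machine that is struggling.
  for(Particle p : particles){
    p.update();
  }

  // Draw only for as long as the frame's time budget allows;
  // particles that miss the budget are simply skipped this frame.
  int budget = 30; // milliseconds available for drawing
  int started = millis();
  for(Particle p : particles){
    if(millis() - started > budget) break;
    p.display();
  }
}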

Network

To display on multiple computers we needed to share data via a network connection. We had two options. The first was to send data for each particle between the computers; the problem with this is that we wanted a large number of particles to be displayed at once, which would create significant network traffic. The other was to make use of the random seeds and only send a message for each ‘event’, letting each computer work out where the particles should be.

Originally we used IBM’s MQTT, but having been designed for robust, high-latency messaging we found it too slow for our purpose of visualising live sound, at least with the implementations available to us. Instead we opted to use Processing’s net library to send events via TCP. The alternative would have been UDP, which is faster but, unlike TCP, does not guarantee delivery. Each message was a serialised JSON object containing various properties of the event, including pitch, amplitude, its position in virtual space and a unique identification for the client.
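Something like the following gives the idea of the sending side (the field names, port number and sendEvent() function are illustrative guesses rather than the actual protocol):

import processing.net.*;

Client client;

void setup(){
  client = new Client(this, "127.0.0.1", 5204); // server address and port are placeholders
}

// Serialise the event as a small JSON string and send it over the TCP connection.
void sendEvent(float pitch, float amplitude, float x, float y, String clientId){
  String event = "{\"pitch\":" + pitch
               + ",\"amplitude\":" + amplitude
               + ",\"x\":" + x
               + ",\"y\":" + y
               + ",\"client\":\"" + clientId + "\"}";
  client.write(event + "\n"); // newline marks the end of a message
}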

Timers

In this visualisation we needed to run different bits of code at different intervals and keep track of the age of things, so a timer object was necessary. In Processing we could use the default Java timers based on the system clock, but since the sketch had to run completely identically to the other clients, even a small framerate delay could seriously impact the timing and cause it to desynchronise. The only way to ensure it was entirely deterministic was to count the number of frames and work from that.
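A minimal sketch of the frame-counting idea (the interval value is arbitrary): because frameCount only ever advances by one per frame, every client that has processed the same number of frames is at exactly the same point, however long those frames actually took.

int interval = 60; // fire every 60 frames, i.e. every two seconds at 30fps

void draw(){
  // This fires on exactly the same frame on every client,
  // regardless of how long each frame took in real time.
  if(frameCount % interval == 0){
    // spawn a visualisation, age particles, and so on
  }
}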

Detecting Sound

To detect sound we used the Minim library and its FFT class. We detect the pitch with the highest amplitude and add it to an array of tweakable length, which smooths the output. At a regular interval the sketch checks whether the smoothed value has exceeded a threshold and, if it has, creates an event from it.
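In outline, the detection with Minim looks something like this (the smoothing and threshold step is only sketched in a comment, since those values were tweakable):

import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;
AudioInput in;
FFT fft;

void setup(){
  minim = new Minim(this);
  in = minim.getLineIn(Minim.STEREO, 1024);
  fft = new FFT(in.bufferSize(), in.sampleRate());
}

void draw(){
  fft.forward(in.mix); // run the FFT on the current audio buffer

  // Find the frequency band with the highest amplitude.
  int loudest = 0;
  for(int i = 0; i < fft.specSize(); i++){
    if(fft.getBand(i) > fft.getBand(loudest)) loudest = i;
  }
  float pitch = fft.indexToFreq(loudest);
  float amplitude = fft.getBand(loudest);

  // In the real sketch these values go into the smoothing array and are
  // checked against the threshold at a regular interval before an event
  // is created.
}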

Visualisations

Each visualisation object is generated from an event sent from the server. The visualisation object then creates particle objects moving from its origin; once a particle reaches the end of its life it is removed and replaced, unless the visualisation itself has passed its own duration and is finishing. This way a finishing visualisation doesn’t suddenly remove all its particles; instead it waits for the existing ones to expire before telling the code cycling through the visualisations that it is ready to be removed.
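Stripped right down, that lifecycle works like this (the class and method names here are illustrative rather than taken from the actual code):

class Visualisation{
  ArrayList<Particle> particles = new ArrayList<Particle>();
  int duration;             // how long the visualisation keeps spawning, in frames
  int age = 0;
  boolean finishing = false;

  void update(){
    age++;
    if(age > duration) finishing = true;

    for(int i = particles.size() - 1; i >= 0; i--){
      Particle p = particles.get(i);
      p.update();
      if(p.isDead()){
        particles.remove(i);
        // Expired particles are replaced unless the visualisation is winding down.
        if(!finishing) particles.add(new Particle());
      }
    }
  }

  // Tells the code cycling through visualisations that this one can be removed.
  boolean isFinished(){
    return finishing && particles.isEmpty();
  }
}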

The code has a ‘Visualisation’ class that allows multiple types of visualisation to be created; it is passed the data for the visualisation and the type to create, then acts as a bridge to and from the specific visualisation.

Direction

For the visualisation to be effective, effects had to be displayed as close to the source of the sound as possible. To do this the clients send their detected sound events to a central server, which compares incoming events against each other and attempts to find cases where the pitch matches, assuming that two sounds at the same time and the same pitch have the same source. It then uses the amplitudes of the two sound events with linear interpolation to determine the position between them. The server removes the original matching events from the array and adds a new one with the calculated position. After all of this it cycles through the events and broadcasts them to the clients.
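The position calculation itself is essentially a weighted average, which in Processing can be done with lerp() (a rough sketch with made-up names, shown in one dimension):

// Two clients heard what appears to be the same sound (matching pitch).
// The louder it was at a client, the closer the source is assumed to be.
float positionBetween(float posA, float ampA, float posB, float ampB){
  float weight = ampB / (ampA + ampB); // 0 = at client A, 1 = at client B
  return lerp(posA, posB, weight);
}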

Mapping the Virtual to the Physical

Mapping the virtual space to the physical, so that things could move naturally from one screen to another, raised a number of challenges.

Before even working on cross-screen support we first worked on cross-window support, since for full portability it had to be possible to run the client in a window, which also allows multiple clients to run on a single computer. The sketch makes use of the scale and translate functions to zoom in and show the correct portion of the full virtual space.
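Roughly, each window knows which rectangle of the virtual space it is responsible for, and translate() and scale() map that rectangle onto the window (the view offsets, zoom factor and drawVisualisations() function are illustrative):

float viewX = 1920; // left edge of this window's view of the virtual space
float viewY = 0;    // top edge of this window's view
float zoom = 1.0;   // window pixels per virtual-space unit

void draw(){
  background(0);

  pushMatrix();
  scale(zoom);               // zoom into the virtual space
  translate(-viewX, -viewY); // slide the view so this window's portion is on screen

  // Everything here is drawn in virtual-space coordinates, so a particle
  // leaving the right edge of one window appears at the left of the next.
  drawVisualisations();

  popMatrix();
}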

For mapping multiple different physical screens to the virtual space, the sketch first has to calculate the pixel density of the screen; unfortunately this can’t be found out reliably in software. To work around this the user must set the diagonal size of the screen in inches (inches simply because screens are still most commonly measured in them). The screen’s aspect ratio is calculated from its resolution, and together with the diagonal size this gives the physical width and height of the monitor, from which the pixel density can be calculated. With the pixel density the sketch scales so that everything is the same physical size regardless of display. The user must also set the screen’s offset from another screen, measured in metres, and the microphone offsets used for determining the source of the sound relative to the screen. Out of this came a scale of 10 pixels per centimetre.
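The calculation goes from the diagonal in inches and the resolution to pixels per centimetre, something like this (the numbers are just example values):

float diagonalInches = 21.5; // entered by the user
int resX = 1920;
int resY = 1080;

// Length of the diagonal in pixels, then how many pixels fit into one inch.
float diagonalPixels = sqrt(resX * resX + resY * resY);
float pixelsPerInch = diagonalPixels / diagonalInches;

// One inch is 2.54 cm, so convert to pixels per centimetre.
float pixelsPerCm = pixelsPerInch / 2.54;

// Physical size of the screen, used for the offsets between displays.
float screenWidthCm = resX / pixelsPerCm;
float screenHeightCm = resY / pixelsPerCm;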

Tying it back into physical space in this way had some strange effects, such as it being possible to measure network delay in centimetres. It is strange how disconnected from physical dimensions computer interfaces and environments have become.

 

DAT301 – Human Computer Interaction

The way that humans interact with computers is constantly changing. It seems that this change is usually spurred on by advances in technology as opposed to changes in thinking. An example is the transition from the computer mouse to touch screen devices. The first mouse prototype was developed by Douglas Engelbart in 1963 and featured in his 1968 demonstration, which has come to be known as ‘The Mother of All Demos’.

Later designs incorporated a ball with the wheels against it, rather than the wheels directly contacting the surface as in Engelbart’s design. Over time mechanical mice have been superseded by optical mice, which make use of photodiodes to detect movement. The miniaturisation of computers themselves brought laptops, which made it necessary to integrate a small pointing device into them; for this, touchpads are used. With PDAs and early tablet computers such as the Intel Web Tablet, a stylus was used for pointing. With resistive and now capacitive touch screens, users need only touch the screen to interact with a device. People can now use gestures, giving a wider scope for control and making interaction more intuitive. With technology like Microsoft’s Kinect and Leap Motion, it is conceivable that touch screens may too become obsolete.

Throughout all of this development the basic idea has stayed the same: controlling a specific point in a virtual space and then carrying out an action based on what that area of the screen represents. Take clicking on an icon to start a program: the icon isn’t the program, it is merely an image on a screen, yet through interfaces we create these object-based metaphors. To access something we want, we use, in the case of a mouse, an agent that represents our focus.

DAT301 – Kinect

Microsoft’s Kinect offers exciting opportunities for digital artists; it has made previously expensive technology such as motion capture and 3D scanning accessible, paving the way for creative implementations.

After being shown demonstrations of both the motion capture and 3D scanning aspects of the device, and of various pieces of software available to make use of them, we were eager to see what we could come up with.

After borrowing one of the Kinects, the first thing we tried was the new Kinect support for the sandbox game Garry’s Mod, which had been added the previous day. With it we were able to control a character in the game in real time and, through the character, interact with objects within the game. Using the projector in the ‘Dat Cave’ made it easy for the person standing in front of the Kinect to see how their movements appeared. As well as trying different movements we experimented with kicking and pushing objects.

After experimenting with direct representation based mainly on the motion capture aspect of the Kinect we later experimented with Processing.

DAT301 – Bioresponsive Game

We were introduced to, and shown demonstrations of, a number of biometric recording devices, including a brain wave measuring headset called a MindWave and a biofeedback sensor kit with several different sensors. Based on this we had to create an interactive project making use of this technology.

Writing this after the presentation, it’s fair to say that none of the group were happy with the outcome. We felt that the main problem was the lack of a solid, interesting concept and of appealing aesthetics to go with it.

We decided, for the sake of variety, to avoid Arduino and were attracted to the idea of producing a game affected by the level of stress experienced by the player. To get an early start I proposed porting a game I had made in Flash in the first year (the poorly named Klepto1) into Processing as a base to build off of. I was more concerned with the technical detail of the programming, making extensive use of object-oriented programming and keeping it flexible for the group to use. After this we left completing the remainder of the project too late and had to rush, dropping the use of live data, and there was also confusion over which pre-recorded data we were using. We believe we can avoid this happening again.

DAT301 – Timer

Here is the simple Timer class that I mentioned in the previous post.

// Timer
// author Jo Redwood jred.co.uk

class Timer{
  int period;             // interval between activations, in milliseconds
  int time;               // current time, updated each frame
  int lastTime = 1;       // time of the last activation
  boolean active = false; // true only on frames where the period has elapsed
  boolean running = true;

  Timer(int newPeriod){
    period = newPeriod;
  }

  // Call once per frame; sets 'active' when the period has elapsed.
  void update(){
    if(running){
      time = millis();
      if(time - lastTime >= period){
        lastTime = time;
        active = true;
      }else{
        active = false;
      }
    }
  }

  void setPeriod(int newPeriod){
    period = newPeriod;
  }

  // Resumes the timer
  void start(){
    running = true;
  }

  // Pauses the timer
  void stop(){
    running = false;
    active = false; // stop reporting activations while paused
  }

  boolean active(){
    return active;
  }
}

Usage:

Timer timer;
void setup(){
  timer = new Timer(500);
}
void draw(){
  timer.update();
  if(timer.active()){
    // Do stuff
  }
}