Interaction Design presentation video Final

to see the earlier (longer) edit of this video, click here

to see the “Rumblebee reactions” video, click here

After making the first video, I realised that it had several flaws. Firstly, it was far too long (although I made it purposely 2 minutes 30 seconds, as that was the brief I was given), and if I were to make a video that long again I would certainly film more shots and perhaps add a more involved narrative. Before starting this project I didn’t have any real concept of how long time is in terms of filming: something that you feel takes a long time to film and will fill a good portion of the video may in fact turn out to feel very short, or have to be shortened in order to fit the tone or rhythm of the film. I have definitely learnt that when filming I should aim to capture at least twice as much footage as I think I will need. Another problem was that several people who had watched the first video commented that it could be interpreted as being sexual, which is the absolute last thing I wanted! I expect this is partly down to the lengths of some of the stroking shots, and I found that shortening the video, as well as changing the wording of some of the text, has helped (I hope) eliminate that problem.

Ideally, I would have liked to film the video again from scratch with the knowledge that I have now, and I found it frustrating having to work with only the shots I took originally and being able to see all the flaws in them (such as the camera being at a slight angle, or the footage being grainy due to the lack of light). But at this stage in the project I don’t feel I have the time to go back and redo things simply because I’m being picky, and I think the final video itself is of a good enough standard that I can leave it be.

I think the video editing, more than anything, has been the part of this project that I have taken forward the most. While I would like to have a better range of skills with actually making interactive projects and Arduino (and I am still aiming to use Arduino in two of my projects), it is the video editing where I have been able to focus the majority of my efforts. I find it really interesting how it gives your object a completely different platform to be engaged with: rather than having made an object and having others look at or interact with it in order to judge it, they are shown it within a specific context and narrative which you dictate. In this case, the object I have created is viewed in a consumer context, as a product which might be sold, rather than as a more artistically focused piece. Certainly, from the reactions I have had from people interacting with it, it fulfils the purpose that I designed it for: to foster and encourage happy and affectionate interactions between the user and the object, with it being treated almost as a pet or a baby. This is created not only by the shape, but greatly reinforced by the sound, and most people seem to adopt it very quickly. There were those who were hesitant and confused around it and didn’t know what it was they were “supposed to be doing with it”, but I am inclined to say that these are a demographic of people who are not likely to be playing with cuddly toys, and there will always be people who don’t react positively to a product or a piece of work.


Rumblebee reaction video


Video editing (the joys of)

I decided to go back to an earlier piece of work for my Field module, Interaction Design, and re-edit the video I presented in order to make it smoother, now that I know it doesn’t need to be precisely 2:30. As well as re-editing the final video, I decided to put together a video from clips I took of people’s reactions to the toy, in order to judge whether or not it was getting the reaction I wanted.

To watch the videos, click here for the “Rumblebee Reactions” of people’s response to the toy, and click here for the final edit of the presentation video.

While I must say I do, overall, really enjoy video editing, and it is a new skill that I otherwise would never have touched, it can be extremely frustrating and monotonous at times. I imagine that with experience, and with learning to use the software better (I am using iMovie), things can probably be done much more easily and quickly, but I found lots of things finicky and repetitive. Overall, however, I find the process very satisfying and rewarding, and I would definitely like to make videos of my work again in future.

It’s easy to look at the finished videos and assume that it was a fairly straightforward process, especially with the reaction video, and that I’ve simply strung a few clips together and put some music to it. Unfortunately, it was far from that simple.

unedited

We begin with an unedited video clip. Many of my clips, I now realise, were filmed in portrait, because when I was filming (having had no experience with making videos) it made sense to me to hold the camera in the way which framed the subject best, as you would when taking a photograph. I failed to take into account that videos are, of course, displayed in a landscape frame. This means that almost all of my clips had to be rotated and cropped. Not only does this create much more work for me, it comes with the issues of reduced quality from the zoomed-in shots, and of trying to account for the new frame when cropping. I have certainly learnt the value of getting the best shots you can while filming, rather than relying on editing.

crop to fill

Once you’ve rotated the video, you then have to crop it, unless you want the footage to occupy only a small strip of the frame with black screen either side. But as I said earlier, this brings up the issue of framing. When filming in portrait I had accounted for the frame, and gotten everything I wanted (reasonably) cleanly in shot; now, with half the height to work with, it is much easier for things to move off screen.

crop ken burns start

I did find a feature called the “Ken Burns” crop, which allows the camera to pan from a starting point in the frame to an ending point. While this was reasonably useful, I would have liked to be more precise, moving the camera up, down, left or right within a shot. I am fairly sure there must be a way of doing this, as it seems like a glaring oversight not to have it, and I expect it’s down to my lack of experience that I can’t figure out how to do it. I did try breaking the video clip into sections where I wanted to change camera direction, but I found I couldn’t accurately line up the end point of clip A with the start point of clip B seamlessly, so it would jolt from one to the next, which was more distracting than the framing issue.

attached audio

The main thing I spent my time doing was editing the audio. When importing an unedited clip, you are presented with this: the audio file is attached to the video. All you can do at this point is crop it along with the video, and it took me a long time, when I was making my presentation video, to figure out how to do anything else with it.

detach audio

First I had to go to the menu and detach the audio from the video clip.

edited audio

This then places the audio in a separate bar parallel to the video clip, where it can be trimmed, moved and edited in all manner of ways. I spent a lot of time trying to edit myself out of the clips as much as possible, not just because I didn’t like my voice, but because I didn’t want it to get in the way of people’s reactions. Rather than each clip having a full audio track, I edited it down so that each had only a few chunks of audio, usually where the person is speaking or the toy is making noise.

As well as trimming the tracks to remove myself, background noise was also a big problem. The studio in which we work is a large open floor, and therefore often full of noise. I did various things to try and remove this from the audio, including having the “Reduce background noise” option turned up to max and playing with the audio levels.

adjust audio levels

Editing the audio levels was also an important part of trying to make the video run smoothly. After removing much of the audio on each clip, when the sound did start it often sounded abrupt, as it was coming in from silence. There were also parts that I wanted to be louder than others, or louder or quieter than they were in the original recording. For example, in many parts of the film you can hear me laughing, and although I have removed as much of that as possible, there were areas where it couldn’t be removed because I am laughing over a noise I want to keep, such as somebody speaking or the toy making noise. In those situations I had to break the audio up into chunks, lowering the volume of my laugh and increasing the volume of the noise I’m trying to capture to make it clearer.

While all of this is tedious, by far the most frustrating part of the editing was the music. iMovie has a good number of copyright-free music tracks to use on videos, some of which I felt suited my theme. However, the issue was trying to make the music loop, as each track only lasts one minute. You would imagine it would be as simple as putting the track down a second time, but you need to take into account that each track has a beginning and an end sequence. After removing those sequences, it’s then the tedious matter of finding two points where you can join the track together reasonably seamlessly. Again, I imagine there is an easier way to do this, or perhaps some way to fade one audio track into another, but I could not find it, and it was simply a matter of trial and error, listening to the same song over, and over, and over again. In fact, at the end of two days’ straight editing I have to say I have a blistering headache.

Another frustrating point was the title overlays, which offered surprisingly few options to pick between. For example, you could have text fade in at the bottom left of the screen, but not the top left, and certain choices would restrict you to typing only in capitals, or only in a certain size of font. This made it difficult to get a consistent aesthetic across both videos, but I managed to settle on a typeface and format that worked at least satisfactorily.

Overall, though, I must say that I am very happy with the results, and I certainly feel that coming back and re-editing the initial video has improved it. If I were to do this project again I would have a much clearer idea of what I needed to film in the first place and which shots I would need to get; working with the limited shots I took initially was the only downside of the re-edit.


Field summary

Interaction Design:

My first Field module was Interaction Design, with our brief being to design an object with a computing element which you interact with without a screen. I found this to be a really interesting brief; although I was inexperienced with creating anything with computing functions, I was looking forward to learning those skills. It quickly became evident that the exercise was much more focused on the design aspect and making a prototype, rather than a finished, functional item. This gave me an interesting insight into the design process from a product designer’s point of view: looking at the target market, creating a video demonstrating the object’s function, and looking at methods that can be used to trick a person into thinking that a prototype is fully functional when in fact it is being controlled manually behind the scenes. The video element especially was something I really enjoyed, and I found it really exciting to be able to go out and film my own video of a product and then sit and edit it together. I feel that I gained a valuable skill that I wouldn’t otherwise have explored thanks to this project, and I think if I were to film a video again I would be far more confident and have a clearer idea of what I am doing. I also plan to go back and re-edit my video, as I feel the original was far longer than necessary: I was under the impression the video had to be exactly 2 minutes 30 seconds long, and so spent a lot of time laboriously making it fit that time, only to find that the time constraint was far looser. For my presentation at the end of the project, I was one of the only people in the group to have any form of physical prototype, let alone a working one; mine worked by means of a mechanism taken out of a toy I already owned, which roughly demonstrated the function I wanted it to have.
I feel this has possibly set me a harder task in carrying the project forward, as most people will just go on to make a prototype, whereas I now have to develop the prototype I have already made. My original plans for the project were quite ambitious, featuring internet connectivity, touch sensors, heaters, and all manner of things, despite my having no experience with Arduino or knowing what components I would have to purchase. However, looking at things now, and with the time I have left for the project, it seems I am going to have to narrow my goals and simplify it to just responding to touch with sound, which I feel is admirable enough in itself if I can achieve it. One of my main issues is not having on-hand technical tutoring to support me with my ambitious ideas, and I’ve found Arduino a very difficult area to jump into without first understanding the basics of components and coding.

 

Internet of things:

My second Field project was the Internet of Things, which I was very much looking forward to, as it is one of my tutor Ingrid Murphy’s main passions in her work. I had heard a lot about its possibilities and was again interested in learning how to bring more technical computing skills into my work, hoping this subject would teach me them. Unfortunately, Ingrid herself had been booked very little time for tutoring this subject; the majority of the tutoring was done by two other tutors, and I personally found very little of what they spoke about to be directly related to “The Internet of Things”. At no point during the project were we pushed into formulating ideas for an actual project; instead, we were largely shown different technologies such as 3D scanning and printing, augmented reality and so on, all of which I had already been shown as a Maker student. Because of this, it wasn’t really an eye-opening experience to a whole new world of possibilities for me, and in any case I don’t feel that a process is a very good starting point for a project. In my opinion, a process should be decided upon after having an idea; while having a wide knowledge base affords you a better choice of options to express your ideas, and may allow you to think in directions you wouldn’t have otherwise, I don’t think it’s good enough to just say “I’m going to do a project on 3D printing”. Because there was no clear end point to work towards, and because we had been shown little to no examples (other than by Ingrid) of these technologies and ideas being actually integrated into artistic works, I found it extremely difficult to come up with any ideas for this project at all, as I simply had no context to work within. There was a strong focus on coding by the main tutor, but again, having no end point to work towards, I struggled to know what I was aiming to achieve with the coding, and it all seemed like a difficult and fruitless effort.
The Raspberry Pi was also emphasised a lot. It was something I had heard of but had no personal experience with, and, not knowing what I could possibly use it for, it seemed like a waste of time and money to purchase one; as a result, much of the teaching became redundant for me, because it was based around programming a Raspberry Pi.

 

Summary:

In all, while I think the idea of Field is an admirable one with the potential to work well, I feel the execution on the whole is poor and uneven. There seems to be a great disparity between projects, with some having a very high workload and others very little, some needing physical outcomes and others resulting only in an idea or a group experience. What I found most difficult about the Field experience was that it was spaced out over many weeks, on a Tuesday and Thursday in the middle of the week. While I understand that we are meant to be simultaneously working on our Subject work over this time, I found it really impossible to put any real focus into more than one project at once. The modules either left me with no spare time at all, as with the Interaction Design project, where I was having to work on my Field work over the rest of the week, or the opposite, as with the Internet of Things, where I had no work to be getting on with, but the week was broken up so that I couldn’t get into the flow of focusing on my Subject work. I would much prefer it if each project were given a dedicated block of time, say three to four weeks solely focused on the Field project at hand, with tutors available at least three out of five weekdays and work to be getting on with on the days without tutor contact. I would also like a more balanced standard of Field subjects to choose from, each asking for roughly the same amount of work, so that everyone is working under the same time constraints and producing the same degree of work. I also struggle with the concept of having to develop our final Field presentation into an ongoing and improved project, as once you begin working seriously on Subject there is very little time to revisit Field, and it is too much to be juggling all at once.
If I had the choice, Field would be condensed entirely into the first term, with each Field module being self-contained and a finished item presented at the end of it. This would allow the ideas and experiences from Field to flow into Subject work more organically. Currently I’ve found Field to be more of an interruption and an obstacle to my Subject work: by the point you get to seriously start working on your Subject in the second term, you have already settled on an idea, which is then difficult to stray away from in order to incorporate elements of Field.


Arduino & input/output research

In order to develop my Interaction Design Field project, I intend to use Arduino to give my soft toy actual function and response to human interaction. However, having had no direct experience or tutoring with Arduino, this is a reasonably daunting task, although I have been assured that it is achievable.

My first hurdle is deciding which Arduino board to purchase. I do in fact have an Intel Galileo, which was being handed out at Maker Faire Rome last year, but I feel it’s a bit too bulky to put inside a plush toy, and I think it’s on the more advanced end of the Arduino spectrum (aka I have no idea what I’m doing).

galileo

Intel Galileo

 

Looking at other boards, the LilyPad Arduino is targeted at being integrated into textiles, and seems to be the smallest and most lightweight of all the Arduinos. But now we get into the details of which LilyPad Arduino to buy: there is the standard LilyPad, the LilyPad Simple, the LilyPad Simple Snap, or the LilyPad USB.

 

LilyPad arduino

LilyPad arduino

The standard LilyPad has the most input/output pins, 14 digital and 6 analogue, although I’m not entirely sure what the difference is between the two and which I will need. The Simple only has 9 input/output pins, 4 of which can also be used as analogue (from what I understand), and “additionally, it has a JST connector and a built in charging circuit for Lithium Polymer batteries”. According to Wikipedia:

“JST connectors are commonly used by electronics hobbyists and consumer products for rechargeable battery packs, battery balancers, battery eliminator circuits, and radio controlled servos.”

“A lithium polymer battery, or more correctly lithium-ion polymer battery (abbreviated variously as LiPo, LIP, Li-poly and others), is a rechargeable battery of lithium-ion technology in a pouch format.”

lithium

So what I can assume from this is that the Simple has a built-in connector for batteries, and it will charge lithium polymer batteries? Whereas the standard LilyPad uses “an external power supply”, which I assume also means batteries? The LilyPad SimpleSnap, however, comes with a lithium polymer battery built in, saving me the trouble of affixing one. Then finally there’s the LilyPad USB, which again seems to be the same as the Simple with only 9 inputs/outputs, but with a built-in USB connector “eliminating the need for a separate USB-to-serial adapter. This allows the LilyPad Arduino USB to appear to a connected computer as a mouse and keyboard, in addition to a virtual (CDC) serial / COM port”. Does this mean that in order to connect the other LilyPads to my computer to program them, I will need a “USB-to-serial adapter”? Does buying one with a pre-built USB port make my life easier? I have no idea. I have to assume it can’t be too difficult to connect the usual Arduinos to the computer if they don’t all come with built-in USB ports.

 

Then there’s the matter of the inputs and outputs themselves.

The main things I want my toy to be able to do are respond to touch with sound and movement. For this, I assume I need touch sensors, motors and... I’m not entirely sure what you would call an output that creates sound. Again, I’m assuming it’s something that is obtainable, but this is entirely assumption-based logic. Looking at the Arduino store, the sensors on offer are only buttons and light detectors, neither of which are useful for me. In terms of actuators (outputs) there is a large selection of LED lights, with a few other items. There is a “stepper motor”, which I think is what I need in order to make the plush vibrate, but I don’t know what size or power of motor I should be looking at, or whether I can program it to go faster or slower.

stepper motor

stepper motor

I’m finding the website’s descriptions of the products very vague about what they actually are or do; for example, “This stepper motor is a strong choice for any project.” Is it? That’s great to hear. Shame I don’t know anything about how it actually functions, outside of the technical specifications. Wikipedia more helpfully describes it as:

“A stepper motor (or step motor) is a brushless DC electric motor that divides a full rotation into a number of equal steps. The motor’s position can then be commanded to move and hold at one of these steps without any feedback sensor (an open-loop controller), as long as the motor is carefully sized to the application.”

So, it sounds like the sort of thing I’m looking for? I assume (more assumptions) if I put a motor inside a toy it will make it jiggle around? It’s hard for me to picture the real world applications from these very technical descriptions of products.

As for sensors, buttons certainly aren’t what I want. After doing a bit of digging around, I managed to find this “Capacitive Sensing Library” page on the Arduino website, which, from what I gather, describes using any Arduino, some wire, resistors and a strip of metal foil such as tinfoil to create a sensor which picks up on the natural capacitance of the human body. This sounds perfect for what I’m trying to do, and reasonably straightforward; it sounds like it should be able to pick up touch through a thin layer of fabric, and potentially detect pressure depending on how sensitively it is set. There is also a YouTube video in the article:

However, the article does say that it is important to ground the Arduino, and I’m not sure how I would go about this with it inside the toy. In another video, though, there doesn’t seem to be an issue with grounding, and I’m thinking it might be self-contained within the Arduino. Here is a more detailed video on how to build a capacitive sensor:

 

Another option is a pressure sensor, which seems to be reasonably small and flexible, and so suitable for the soft toy.

From what I can gather, this is a resistance sensor rather than a capacitive sensor, measuring the level of resistance created in the circuit, although I don’t think that distinction makes any difference for my purposes.

 

After some research, I think the output I’m looking for to create sound is called a DAC, or Digital to Analogue Converter, meaning that it converts a digital output, such as numbers, into an analogue waveform, such as sound. These create 8-bit audio, which isn’t as limiting in its range of sounds as one might first think. I have found an Instructables page on setting this up; however, it does seem to use up a lot of pins, and it is all on a standard rectangular Arduino board, so I’m concerned about how this would translate to the smaller, round LilyPad Arduino.

 

It’s frustrating, as right now all I want to do is get on with making and experimenting, but I don’t have the components to experiment with. I’m not sure which components I’m supposed to be buying, or whether I’ll be missing things, quite apart from the obvious fact that I have no idea how to go about setting them up.


Interaction Design – “Rumblebee” video


Interaction Design: video research

Our final outcome for this project needs to be a 150-second (two and a half minute) video which demonstrates the function of our product, focusing on the interaction between human and interface. With this in mind, I have been looking at some videos which do this well. My immediate thought was of the series of neurowear videos which demonstrate their brainwave-technology products, as I think they are clear videos which demonstrate the products well without the use of speech. This is a format I feel would work well for my product, as it keeps the video clean and simple, without the distraction of a narrator’s voice, focusing purely on the object at hand. Using only text allows you to describe your features briefly and to the point, in an almost bullet-point fashion, whereas I feel narration creates an expectation of a more in-depth description.