During this phase, I spent most of my time figuring out the overview of the experience and how each hardware piece and software feature will fit into the story. I also spent quite a bit of time prototyping my hardware on the MakerBot. Here, I have a version of the case. In the last phase, I had a prototype that used a slide-on mechanism to secure the phone and allow the back plate to be swapped. I did not like the way the seam looked across the middle of the case, and it was difficult to attach.
With the semi-flexible material in the MakerBot, I made a three-piece case with front and back pieces in addition to the texture plate. In this version, screws hold the whole case together. However, after trying the case on a phone, it would be much easier to do a snap fit or a one-piece case in a flexible silicone material.
Here is a version in the translucent plastic. I did not like this one as much, as the details are blurred by the texture of the material.
Here is a list of the components that will need to fit in the case. With the inclusion of the wireless charging coils as well as the LEDs in the sides of the case, the size and thickness of the case may end up being larger than in my initial prototypes. The case will draw power from the phone through a microUSB or Lightning port. This will also enable wireless charging on devices that do not have it. The texture plate will have a capacitive touch layer on it to enable some interactions I designed involving the back plate.
I also started to think about the base station. I wanted to try an asymmetrical texture on the base station, as I thought it would be more visually interesting than a tessellating pattern. The main challenge was building the model for printing, as it was difficult to do geometry like this in Solidworks. I ended up making the geometry in SketchUp and then thickening the walls in Solidworks for printing.
Printing on the MakerBot Replicator 2
I printed the rough sketch at about half scale. The geometry of the part combined with the translucent material makes it look very visually interesting, with refracting planes of light. However, I want the base station to share the same design language as the case, and I don’t think this bolder geometry will work well as a textured back for the case. The translucency might also be a problem, since the station will be encasing electronics.
For the base station, I am reverting to an opaque top, but adding a thick acrylic base for it to sit on so the lights can show through the bottom. The components will include the transmitting end of the wireless charging coils, magnets for holding the phone in place during charging (these can probably be replaced with a gentle slope at the top), and an LED matrix for lighting and mixing colours. The key component here will be the ultrasonic sensor, as it will allow the base to detect whether there is a real person in front of the station and how far away they are.
I tried to build a working prototype of the presence-detecting mechanism. Here, the ultrasonic sensor measures how far away a subject is, up to 3 meters. The row of LEDs lights up in proportion to that distance: a single LED means the subject is quite far, and the whole row lights up when they are very close. However, I was only able to get either the ultrasonic sensor or the LEDs working, not both at the same time.
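The distance-to-LED mapping itself is simple enough to sketch in a few lines. This is an illustrative Python sketch of the logic rather than the firmware running on the prototype; the 3-meter maximum matches the sensor range above, but the 8-LED row length and the function name are my own assumptions.

```python
MAX_DISTANCE_CM = 300  # ultrasonic sensor range: up to 3 meters
NUM_LEDS = 8           # assumed length of the LED row

def leds_to_light(distance_cm):
    """Map a measured distance to a number of lit LEDs.

    Far away -> a single LED; very close -> the whole row.
    """
    clamped = max(0, min(distance_cm, MAX_DISTANCE_CM))
    closeness = 1.0 - clamped / MAX_DISTANCE_CM  # 0.0 = far, 1.0 = touching
    # Always light at least one LED while a subject is in range.
    return max(1, round(closeness * NUM_LEDS))
```

On the actual hardware, this function would sit between the sensor read and the LED write in the main loop; the proportional mapping is what makes the row "fill up" smoothly as someone approaches.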
On the app side of things, I tackled the information architecture by sorting the questions I want to answer into a hierarchy of visibility. The inverted triangle shows the different categories. The top tier is what the user sees when the app is opened; the second, third, and fourth tiers require more taps and are nested deeper within the interface.
The most important thing in the app is that the user should be able to communicate very easily (whether through vibrations, calling, or messaging). The user should be able to tell at a glance whether their partner is available. In the second tier, they should see some overview information about their communications. The third tier lets the user review the communications they have had, and finally, the furthest tier holds the individual settings. The user’s privacy is important, and it is imperative that they have control over what they want to share.
Here is a rough architecture of the app, with some more detailed screens and some interactions listed. I want the app to be intuitive about the information it is collecting and to interpret that information for the user. Instead of a static UI, I opted for a “cards” style UI, where cards only show up when they are relevant to the situation. For example, in Frame 2.2, a card shows up after a lack of communication for a set amount of time. It has a few action buttons attached as a prompt to action for the user. This card only appears when the app detects a lack of communication, and the small settings icon on the card lets the user turn the card off or change how long it takes for the card to show up.
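The trigger for that card reduces to a threshold check against the time of the last exchange. Here is a minimal Python sketch of the idea; the function name, the three-day default, and the timestamp representation are all hypothetical, and the threshold stands in for the per-card setting behind the small icon.

```python
from datetime import datetime, timedelta

def should_show_silence_card(last_communication,
                             threshold=timedelta(days=3),
                             now=None):
    """Show the prompt card only when the couple has not
    communicated for longer than the user-set threshold."""
    now = now or datetime.now()
    return (now - last_communication) > threshold
```

Keeping the threshold as a parameter is what makes the card configurable: dismissing the card or adjusting its timing just changes (or disables) this one check.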
Another example of the intuitive UI in action is the first card shown in Frame 2. This card is linked to the base station in the partner’s home. It tells the user whether their partner is home and whether they are available. Tapping on the card takes the user one level deeper, where the app interprets this information and provides prompts appropriate to the situation. In Frame 3, the app knows that the partner is at home and that the user is on the commute home, so a card with the “Send Location” quick action shows up. Users will configure these quick actions to send through their preferred messaging app, and they will also be able to turn off these features in the settings on each card.
On the main screen (Frame 2), there are also quick actions such as Call, Send a Preset Message, or Share Content. Users will also be able to replay the vibrational messages they have received.
I did some user testing with my scenarios, and one of the things my participants pointed out is that being able to send only taps is very monotonous and can get boring pretty quickly. They also noted that they are always tapping and holding their phones, so a lot of accidental messages would be sent. Much of their feedback is reflected in the new interactions. To send a vibrational message, the user will have to shake their phone twice to trigger the mode. They will also be able to send a variety of taps, swipes, and caresses (enabled by the capacitive touch layer). In Frame 3, if the recipient’s phone is not currently active, the case will glow, indicating a new message. Holding or pressing on the case will play the vibrational message. If the user is currently holding and using the phone (Frame 5), a small notification will alert them to the new message, which plays immediately after.
The users will also be able to tap, swipe and caress during an active phone call.
Another insight from my user testing concerned the interaction that allows users to establish a spontaneous, unplanned connection. In the previous iteration of this interaction, the case would vibrate every time both users were holding their phones. A few participants noted that they are almost constantly checking their phones, or are using their phones to play games or read the news. If the phone buzzed every time they just wanted to do something else, the novelty would wear off fairly quickly and they would get tired of the feature.
The interaction is now a more deliberate gesture. Both users have to put their phones face down and stroke the back of the phone for 5 seconds simultaneously before the cases light up and vibrate, signalling the physical connection.
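Under the hood, "simultaneously for 5 seconds" amounts to checking that the two strokes overlap in time for the full required duration. A rough Python sketch of that condition; representing each stroke as a (start, end) pair of timestamps is my own assumption, not how the prototype is built.

```python
def connection_established(stroke_a, stroke_b, required_seconds=5.0):
    """Each stroke is a (start, end) pair of timestamps in seconds.

    The connection fires only when the two partners' strokes
    overlap for at least the required duration.
    """
    overlap = min(stroke_a[1], stroke_b[1]) - max(stroke_a[0], stroke_b[0])
    return overlap >= required_seconds
```

Requiring a sustained overlap, rather than any two concurrent touches, is what makes the gesture deliberate and filters out the accidental triggers users complained about.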
In addition to charging the phone, the base station will be used to indicate a partner’s presence. When both users are in front of the base station, the base station will then light up.
While the base station won’t light up when only one person is in front of it, that information will still be taken into account. The app will display it and suggest actions to take based on it. For example, in Frame 1 and Frame 2, the app knows that you are on the commute home and that your partner is at home, so the subtle call to action is to send your location to your partner. In Frames 3 and 4, it knows that you are both home, so it suggests a shared activity, such as a game.
Now that I have more of an overview of what the products need to be, the next steps are very important. There are only about six weeks left until the grad show, so I really need to think about the project in terms of what I want to show there. Next week, I need to work out the details of the hardware and finalize the design as well as the assembly of the internal components. I also need to work on the visual design of the app as well as the motion design. However, one of the things I need to figure out beforehand is the video I want to shoot and the story I want to tell with it. Then I can figure out the interactions I want to use to tell that story, and work on the visual and motion design from there.