SUCCESS!!!

I finally got VisualTreeHelper.HitTest() to work. It turns out the problem was the ZIndex of the ellipses, which was preventing the top-most hit test result from showing up properly. Since I constructed the ellipses from the innermost ring outward, the element with the highest ZIndex was the outermost ring, which makes sense: it was made last, so it sat on top of all the other ellipses. So if I were testing for the innermost ring, the hit test would keep returning the outermost ring, just because it is big enough to encompass that entire area.

This is easily solved by defining the ZIndex of each ellipse in the XAML file, like this:

Canvas.ZIndex="99"

The higher the value, the higher the “layer” the element is in.
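
If you happen to be building the rings in code-behind instead, the same property can be set from C# with Canvas.SetZIndex. A quick sketch, where the ring names are made up:

    // Give the innermost ring the highest ZIndex so it wins the hit test.
    // The ring variables here are placeholders for our actual ellipses.
    Canvas.SetZIndex(outerRing, 1);
    Canvas.SetZIndex(middleRing, 2);
    Canvas.SetZIndex(innerRing, 3);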

This is great because the hit test method I wrote myself was pretty bad, and the VisualTreeHelper method is much more accurate. This will improve the application a lot, and I feel a little less resentful of the Microsoft Surface API.
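
For my own reference, here is roughly what the call looks like now. Treat this as a minimal sketch rather than the exact code: orbitCanvas is a stand-in for our actual canvas, and I'm assuming the Surface SDK's Contact.GetPosition for the touch point.

    // Hit test at the contact point; with the ZIndex values set in the
    // XAML, the visual returned is the ring actually on top at that point.
    Point p = e.Contact.GetPosition(orbitCanvas);
    HitTestResult result = VisualTreeHelper.HitTest(orbitCanvas, p);
    if (result != null && result.VisualHit is Ellipse)
    {
        Ellipse ring = (Ellipse)result.VisualHit;
        // ring is the topmost ellipse under the finger, not just the
        // biggest one that happens to cover the area.
    }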

more project progress

Yesterday we spent some time crafting the planet tokens for the application. We used acrylic paint on the wooden spheres we bought from Joann’s, and I think they turned out pretty well. I’m not that great with crafts and traditional media (I’m a little spoiled by Corel Painter X and Photoshop), so it was an interesting challenge. The gaseous planets were hard to paint because we are doing a weird thing where we paint their fluffy outer layer with acrylic; personally, I think it would have been better to just use colored fluff to begin with, but it’s too late at this point, and I think they look good enough.

We are also doing some research on what content to include in the application. I don’t think we really have time to implement the minigames we initially planned, but we will definitely make the application accept user input such as the user’s age and weight, and have it tell them how old they are in Planet X years and how much they weigh under Planet X’s gravity.
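
The math for that is just two ratios: divide the user’s age by the planet’s orbital period in Earth years, and multiply their weight by the planet’s surface gravity relative to Earth’s. A minimal sketch using Mars’s values from memory (the constants should be double-checked before they go in):

    // Age and weight conversion for one planet. The constants are from
    // memory and need to be verified before we use them.
    double orbitalPeriodYears = 1.88;  // one Mars year, in Earth years
    double surfaceGravityRatio = 0.38; // Mars surface gravity vs. Earth's

    double earthAge = 10.0;    // user input: age in Earth years
    double earthWeight = 70.0; // user input: weight in pounds

    double planetAge = earthAge / orbitalPeriodYears;        // Mars years lived
    double planetWeight = earthWeight * surfaceGravityRatio; // weight on Mars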

A mild setback we are having is that the Microsoft Surface in the lab is apparently broken, and so far the data on its hard drive cannot be salvaged. Unfortunately, our latest prototype is stored on that drive, and I didn’t think to retrieve it earlier. The main reason I left it on there is that every time we transfer the files, I have to manually fix the image file paths, which have to be hard-coded into the file due to some obscure Microsoft Surface bug that no one knows how to fix. So it’s frustrating having to port the files back and forth between my computer, the lab computer, and the Surface itself.
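
Once I rebuild the prototype, one thing I want to try is computing the image paths at startup instead of baking them in, so the project can move between machines without hand-editing. A sketch of the idea, assuming the images ship in an Images folder next to the executable (the folder, file, and element names are hypothetical), though I don’t know yet whether it dodges the Surface bug:

    // Build an absolute image path at runtime from wherever the .exe
    // actually lives, instead of hard-coding a machine-specific path.
    string baseDir = AppDomain.CurrentDomain.BaseDirectory;
    string imagePath = System.IO.Path.Combine(
        System.IO.Path.Combine(baseDir, "Images"), "mars.png");

    BitmapImage bmp = new BitmapImage(new Uri(imagePath, UriKind.Absolute));
    marsImage.Source = bmp; // marsImage is a placeholder Image element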

Additionally, I had to make some key changes to the file to make the tags work properly on the Surface (it behaves differently from the simulator, for whatever reason), and I just hope I can remember the changes I made two weeks ago, or else I’m going to have to figure them out through trial and error again.

The lesson here is that it’s important to document everything you do for future reference, in case you need to do it all over again, and to keep backup versions of your files, even if they only work on a specific machine.

So far things are going well, but the major inconvenience with this project is that I can’t just work on one computer: I can code on my own machine and superficially debug it so that Visual C# doesn’t see any syntax errors, but I can’t make sure it behaves properly until I get to the lab and run it with the simulator. Even then, I can’t be sure it works on the actual Surface until I try it on the Surface machine.

I’m on campus for Thanksgiving, so I will be using much of that time to work on the project and hopefully restore the files to the way they were before the Surface malfunctioned.

astronomy project prototype II

Today we presented our second project prototype. At this point I feel more confident about our project’s progress, because we have figured out the majority of the hardest implementation problems and from now on it is mostly a matter of populating the application’s framework with content and refining its features and appearance.

The things we focused on implementing for this prototype were the physical tokens and the basic screen transitions. Currently, the application exhibits most of the fundamental functionalities we described in our earlier prototype session. The planet tokens interact correctly with the surface, and the image of each planet on the surface can be manipulated by touch. The planet will also “activate” when it is over its correct orbit ring, allowing the user to travel to the planet’s surface using the rocket ship and astronaut tokens.

There are still some things that need to be fixed, such as the fact that the planet images persist even when switching modes, but that should be easy to do: I actually already have methods that deal with this, but did not have time to figure out where in the program they need to be invoked.
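
For my own notes, the cleanup will probably end up looking something like this; the method and element names here are placeholders for whatever I actually called them:

    // Remove every stamped planet image from the canvas on a mode switch.
    private void ClearPlanetImages()
    {
        foreach (Image planet in planetImages) // our list of stamped images
        {
            mainCanvas.Children.Remove(planet);
        }
        planetImages.Clear();
    }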

The main challenges we ran into were retrieving the coordinates of elements on the canvas (like the planets) and writing accurate hit testing methods. We investigated some methods provided by the Surface API, like VisualTreeHelper.HitTest, but did not have much success with them, so we basically had to write our own by checking whether the coordinates of a planet’s image fell within a certain range of points. That would probably work fine if the hit-test area were a rectangle, but because the hit-test region is an ellipse, it’s not very precise. It would be more precise if the rings were perfect circles, but that would not be an accurate depiction of the solar system, so I’m not sure which is better.
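
For what it’s worth, the range check can be made exact for an ellipse by normalizing the point against the ellipse’s two radii; a sketch of the test, with made-up parameter names:

    // True if (x, y) lies inside an axis-aligned ellipse centered at
    // (cx, cy) with horizontal radius rx and vertical radius ry.
    static bool IsInsideEllipse(double x, double y,
                                double cx, double cy,
                                double rx, double ry)
    {
        double nx = (x - cx) / rx;
        double ny = (y - cy) / ry;
        return nx * nx + ny * ny <= 1.0;
    }

A ring would then be “inside the outer ellipse but outside the inner one,” which is just two calls to the same test.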

As for the tokens, we tried using different materials to convey the physical properties of the planets. We added fluffy coverings to Jupiter and the other gaseous planets to imitate their gaseous state. When it comes time to make the actual tokens for the final product, we’ll add the right colors and put weights into the tokens that represent denser planets.

For our next steps we will choose three interesting planets and fully implement an experience for each one, since we don’t have time to do all eight. We are probably going to go with Mercury, Mars, and Jupiter, though I think Saturn might be more interesting than Mercury.

programming with Microsoft Surface

I have been wrestling with some aspects of programming for Surface lately, and it’s concerning to me that a lot of the fundamental features for the astronomy surface project are turning out to be very difficult to implement.

There’s the added inconvenience of not being able to code and test from home. I have a 64-bit Windows 7 PC, and while it is possible to make the Surface SDK work on Windows 7, 64-bit is a problem. I tried patching the MSI file following the instructions at this site, and was actually able to get the SDK to install on my machine, but there are still issues with the Surface Simulator. It runs, and it can run the Attract application perfectly because I made sure to patch all of the .exe files in the simulator directory, but when I try to run a program I’ve coded, the Surface Simulator crashes inexplicably (for some reason that seems unrelated to the 64-bit issue). I think there is a deeper incompatibility problem that I probably don’t have time to investigate further, so I am going to just code on my own machine and painstakingly debug it whenever I’m in the lab.

Anyway, the difficult features I am trying to make work include finding the coordinates of a tag on the surface and adding ContactDown handlers programmatically. When I try to get the coordinates of a tag (using tag visualizations), I get (0,0) no matter where the tag is on the surface. This really makes no sense, and the only explanation I can think of is that the tag visualization itself takes up the entire screen, so its position would always be (0,0).
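
If the visualization really is screen-sized, one thing I still want to try is asking for the contact’s position relative to the canvas rather than relative to the visualization itself. A sketch of what I mean; this is untested, the element names are placeholders, and I’m going from memory on the Contact.GetPosition signature:

    // In a ContactDown handler, get the position relative to the canvas
    // instead of the (possibly screen-sized) tag visualization.
    private void OnContactDown(object sender, ContactEventArgs e)
    {
        Point onCanvas = e.Contact.GetPosition(mainCanvas);
        // onCanvas should now change as the tag moves around the surface.
    }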

So I tried the alternative method of handling tags, ContactDown. I couldn’t get this to work either: I tried both adding Contacts.ContactDown="someMethod" to the canvas definition in the XAML file and adding it in the CS file with thing.ContactDown += someMethod(). When tested, the methods weren’t getting called, even when I made obvious contacts on the surface. Our first lab demonstrated the functionality of ContactDown and worked perfectly, so I don’t know what the problem is. All of the samples provided by the Surface API also seem to show that I’m writing the correct code, and because this is such a basic thing, there aren’t many forum posts or FAQs online that describe and resolve this kind of problem.
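
Two things I want to double-check next time I’m in the lab, in case one of them is the culprit: writing thing.ContactDown += someMethod() with parentheses calls the method instead of subscribing it, and since ContactDown is an attached event, I believe the code-behind subscription is supposed to go through the Contacts helper class. A sketch under those assumptions, with placeholder names:

    public SurfaceWindow1()
    {
        InitializeComponent();

        // Subscribe through the attached-event helper; note there are no
        // parentheses after OnContactDown, so it is passed, not called.
        Contacts.AddContactDownHandler(mainCanvas, OnContactDown);
    }

    private void OnContactDown(object sender, ContactEventArgs e)
    {
        // If this still never fires, check hit-testability: a Canvas with
        // no Background won't register contacts, so it may need
        // Background="Transparent" in the XAML.
    }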

I also encountered another problem when trying to add Resources to a project. I wanted to add a custom background picture, because that will be important for our project. However, even when following the Surface API’s steps closely to add the Resource, I still got build errors claiming that the picture wasn’t labeled as a resource (it clearly was). It turns out there is a bug in Visual Studio 2008: if you do not specify the full path of the resource file, this error appears. I was finally able to get it to work, so hopefully preparing the project prototype for the next project benchmark will be easy from now on.
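
In case I run into it again, the working reference ended up looking roughly like this, with the full pack URI spelled out; the folder and file names here are just examples:

    // Load an image whose Build Action is set to Resource, using the
    // full pack URI instead of a relative path.
    BitmapImage background = new BitmapImage(
        new Uri("pack://application:,,,/Resources/background.png",
                UriKind.Absolute));
    backgroundImage.Source = background; // backgroundImage is a placeholder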

A TUI from “Heavy Rain”

There’s a great scene in the PS3 game “Heavy Rain” that demonstrates a very futuristic, TUI-like device that the character Norman Jayden, an FBI agent, uses to investigate a serial murder case. The device is called the ARI system, and it displays information in a projector-like way whenever Norman “selects” a thing within his surroundings. It can also envelop him in a sort of virtual world where he can sort through the “files” of the case.

It makes more sense to see it in action, so here is a video of the part in the game where you control Norman and use ARI.

The device consists of a pair of glasses, which alter Norman’s vision so he can see the virtual world of ARI and the visualization of data on objects around him, and a glove, which allows Norman to select things in his surroundings for ARI to analyze. This reminded me of the TUI developed at MIT where the user had color-coded fingertips and wore a projector around their neck. They could gesture to things around them, and the TUI would respond to those gestures accordingly.

Surface solar system TUI prototype

Our TUI will be a Microsoft Surface-based application that incorporates physical tokens that interact with the surface, as well as multi-touch interaction with images on the surface. Its purpose is to create an immersive outer space experience for young children who have difficulty understanding the abstract concepts of space and the solar system. We will use metaphors that are intuitive for children so that they can easily navigate the different modes of the application.

Tokens:

1. 8 Planets + Pluto
A model for each of the 8 planets and Pluto. Each model will be designed to look like the real planet and will be made of materials that reflect the planet’s physical properties: for example, the gaseous planets will be made of some lightweight, foamy material, while the heavy, dense planets will be made of a hard material and weighted.

2. Spaceship
A model spaceship. The user can “travel” to a planet by placing the spaceship on the image of the planet on the surface.

3. Astronaut
A model astronaut. The user can “land” on a planet by placing the astronaut on the image of the planet on the surface.

Modes:

The application will have three levels of depth:

1. Solar System mode

Designed to seem like space, with the sun at the center. The sun will be surrounded by 8 concentric rings that represent the orbits of the 8 planets. The rings will be faint but visible. In this mode, the user can put the planet tokens on the surface. Upon contact, the planet token will make a “stamp” of that planet on the surface, which can be manipulated across the surface. If the user stamps a planet in one location and then places the planet token somewhere else, the planet will jump to the new location (multiple copies of planets will not appear; see the code sketch after the mode descriptions).

If the user successfully places the planet on the correct ring around the sun, that ring will light up, and the planet will be locked into orbit.

If all 8 of the planets are successfully placed, the model will look like this.

2. Planet orbit mode

To get to this mode, the user needs to place the spaceship token on the image of the planet on the surface. The user can switch which planet they are viewing by putting the corresponding planet token on the surface while in this mode; that way, they can quickly transition between planets without needing to switch modes. In this mode, the user can see details about the planet’s moons and rotation period, as well as get a close-up view of the planet. They can interact with any of the images on the surface at this time, which will respond with some kind of animation, sound clip, or text box.

3. Planet surface mode

To get to this mode, the user needs to put the astronaut token on the planet, whether they are in planet mode or in solar system mode. Again, they can switch which planet they are looking at by using planet tokens in this mode. To return to solar system mode, they need to take the astronaut token off the surface. In this mode, the user is given a view of the surface of the planet as if they were on the planet, and can see information on the temperature, climate, and chemical composition of the planet.
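
Here is the stamping sketch mentioned above: one way to guarantee a single copy per planet is to keep a dictionary from each planet to its stamped image, and move the existing image instead of adding a new one. The types and names are placeholders, not our final code:

    // One stamp per planet: move the existing image if it's already there.
    Dictionary<string, Image> stamps = new Dictionary<string, Image>();

    void StampPlanet(string planetName, Image planetImage, Point location)
    {
        if (!stamps.ContainsKey(planetName))
        {
            stamps[planetName] = planetImage;
            mainCanvas.Children.Add(planetImage);
        }
        Canvas.SetLeft(stamps[planetName], location.X);
        Canvas.SetTop(stamps[planetName], location.Y);
    }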

Ben Shneiderman talk

The talk was about how to represent data in a way that makes it easier for people to discover new information and make interesting observations about it. I felt it was particularly relevant to me because I’m more of a visual learner. When I see a huge wall of text or code or a series of mathematical symbols, my mind tends to go blank, and I can’t quickly draw any interesting conclusions from information in that form. Spreadsheets are really the worst way to show data. But when it’s represented visually with color, size, and proximity, it’s much easier to recognize patterns and anomalies that might be interesting.

In retrospect, using some method of representing data visually would have made my summer research experience much easier. The research I did involved downloading Google Trends spreadsheets for over 800 congressional candidates, and while I did it using code, there was basically no way of catching mistakes or anomalies short of looking through each spreadsheet individually. I obviously did not have time to do that, so when I wrote scripts to process those spreadsheets, the only way I knew the data had been downloaded incorrectly was when the code broke in a specific way.

Hopefully, our solar system TUI will be able to represent information in a tangible and visual way instead of just forcing the user to read a lot of text or confronting the user with a lot of facts in list form.