How to set up a MiRTLE Server

April 30, 2009

One of the projects that we undertook last year was MiRTLE, a collaborative research project between Sun Labs and the University of Essex in the UK to develop a Mixed Reality Teaching & Learning Environment. We finished the project last autumn, and I’ve been writing up a Sun Labs technical report, which I hope will be published in the near future.

One outcome of the project is that we’ve made available all the files necessary to host your own MiRTLE setup. In the instructions that follow, I assume that you’ve already taken a look at Jon’s excellent blog posting on setting up a Wonderland server on Solaris. The remainder of this blog posting is adapted from his earlier entry.

I start out by logging in to my OpenSolaris server (named ‘opensolaris’) and creating a directory to contain the Wonderland code. Since this particular setup is for MiRTLE, I’ll call the directory "mirtle":

[opensolaris ~]$ su
[opensolaris ~]# mkdir -p /export/home/wonderland/mirtle
[opensolaris ~]# chown -R bh37721 /export/home/wonderland/mirtle
[opensolaris ~]# exit
[opensolaris ~]$ cd /export/home/wonderland/mirtle

In this case, I’m creating a directory in /export/home, and changing it to be owned by my user. A better option would be to create a new user for MiRTLE, and perform the installation as that user. In either case, you can install in any directory for which you have write permissions. You’ll just need to update the paths for the WFS root in the instructions below.

(Linux note: most Linux distros don’t let you directly do "su". Instead, replace the command performed as root with a call to "sudo", for example: "sudo mkdir -p …").

Next, I download the pieces of software I will need.  First, I get the 0.4 release of the Wonderland server for Solaris:

[opensolaris mirtle]$ wget
13:58:56 (256.45 KB/s) - `' saved [145402562/145402562]

Next I download the MiRTLE.war file from the wonderland-incubator web site (and rename it):

[opensolaris mirtle]$  wget "*checkout*/wonderland-incubator/MiRTLE/MiRTLE.war?rev=HEAD"
14:14:13 (208.04 KB/s) - `MiRTLE.war?rev=HEAD' saved [163848730]
[opensolaris mirtle]$  mv MiRTLE.war\?rev\=HEAD MiRTLE.war

Next I download the zipped WFS files that provide the MiRTLE world from the same site:

[opensolaris mirtle]$ wget ""
14:18:57 (65.56 KB/s) - `' saved [23269/23269]

Next I download the zipped files that provide the MiRTLE shared applications:

[opensolaris mirtle]$ wget ""
14:20:09 (234.06 KB/s) - `' saved [2752564/2752564]

Finally, I download the Glassfish application server, which I will use to host the web administration:

[opensolaris mirtle]$ wget
14:25:23 (257.37 KB/s) - `glassfish-installer-v2ur2-b04-sunos_x86.jar' saved [64375686/64375686]

Now I have all the pieces I need:

[opensolaris mirtle]$ ls -l
-rw-r--r-- 1 bh37721 staff  64375686 2008-04-12 04:10 glassfish-installer-v2ur2-b04-sunos_x86.jar
-rw-r--r-- 1 bh37721 staff   2752564 2009-04-20 09:10
-rw-r--r-- 1 bh37721 staff     23269 2009-04-17 17:44
-rw-r--r-- 1 bh37721 staff 163848730 2009-03-03 14:14 MiRTLE.war
-rw-r--r-- 1 bh37721 staff 145402562 2008-08-16 00:26

Install the Wonderland server

Next step is to set up the Wonderland server. First I unzip it:

[opensolaris mirtle]$ unzip
creating: lg3d-wonderland/

The next step is to include the MiRTLE shared applications. I unzip the additional MiRTLE files and copy them into the appropriate directories in the Wonderland server directories:

[opensolaris mirtle]$ unzip
creating: MiRTLE-sharedApps/
[opensolaris mirtle]$ cd MiRTLE-sharedApps
[opensolaris MiRTLE-sharedApps]$ cp -r apps-linux/ ../lg3d-wonderland/config/
[opensolaris MiRTLE-sharedApps]$ cp -r apps-solaris/ ../lg3d-wonderland/config/
[opensolaris MiRTLE-sharedApps]$ cp *.zip ../lg3d-wonderland/data/Wonderland/test/appshare/
[opensolaris MiRTLE-sharedApps]$ cp ffsetup ../lg3d-wonderland/bin

This setup relies on installations of Firefox and OpenOffice being in their "usual" places, i.e.

  • /usr/bin/firefox
  • /usr/bin/soffice

If their paths are different, then you’ll have to edit lg3d-wonderland/config/apps-solaris/apps-smc.xml.
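Before editing anything, a quick shell check along these lines confirms whether the binaries are where the config expects them (the two paths are the "usual" places listed above; the helper function is just for illustration):

```shell
# Report whether a shared-app binary is where the MiRTLE config expects it.
check_app() {
    if [ -x "$1" ]; then
        echo "found: $1"
    else
        echo "missing: $1 (update apps-smc.xml with the real path)"
    fi
}

check_app /usr/bin/firefox
check_app /usr/bin/soffice
```

If either line reports "missing", locate the real binary (e.g. with `which firefox`) and put that path into apps-smc.xml.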

I also download the MiRTLE overview audio file into the correct directory (and rename it):

[opensolaris MiRTLE-sharedApps]$  cd ../lg3d-wonderland/audio/
[opensolaris audio]$ mkdir mirtle
[opensolaris audio]$ cd mirtle
[opensolaris mirtle]$ wget "*checkout*/wonderland-incubator/MiRTLE/audio/mirtle/"
15:04:21 (200.26 KB/s) - `' saved [61034424]
[opensolaris mirtle]$ mv

The next step is to unzip the MiRTLE WFS files:

[opensolaris mirtle]$ cd ../../..
[opensolaris mirtle]$ unzip
creating: mirtle-wfs/

The next step is to edit the files with the right values for my server:

[opensolaris mirtle]$ cd lg3d-wonderland
[opensolaris lg3d-wonderland]$ vi
# Set the hostname to be used for outbound socket connections.
# Java finds it hard to figure this out automatically. This is used by the voice bridge
# and the X app sharing s/w for making outbound socket connections.

The values I change are:

  • local.hostaddress – the address of the Wonderland server, which in this case is the same as the address of the web server. I discovered this by using ifconfig -a. Alternatively, you can use a fully qualified host name.
  • wfs.root – the location of the WFS files to load in this world. The MiRTLE world describes the contents of the classrooms, with content such as PDFs loaded from the Internet.
  • art.url.base – where the clients will download artwork from. In this setup, clients download the artwork from the deployed MiRTLE.war file.
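As a concrete (hypothetical) example, here is roughly what my three edited entries looked like. Every value below is a placeholder, so substitute your own host address, install path, and web server port:

```
# All values below are placeholders -- use your own address and paths.
local.hostaddress=192.168.1.42
wfs.root=/export/home/wonderland/mirtle/mirtle-wfs
art.url.base=http://192.168.1.42:8080/MiRTLE/
```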

The server is now configured! To make life easier, Jon wrote a little script to launch all the separate pieces of the server. I put this in the lg3d-wonderland/bin directory:

[opensolaris lg3d-wonderland]$ vi bin/
echo "Starting Voice Bridge"
./bin/ > wonderland-bridge.log 2>&1 &
sleep 15
echo "Starting Wonderland Server"
./bin/ > wonderland-server.log 2>&1 &
sleep 15
echo "Starting Server Master Client"
./bin/ > wonderland-smc.log 2>&1 &
echo "Wonderland started"

To run the server, I just need to make the script executable, and then run it. It will automatically put all the Wonderland server processes in the background:

[opensolaris lg3d-wonderland]$ chmod +x ./bin/
[opensolaris lg3d-wonderland]$ ./bin/
Starting Voice Bridge
Starting Wonderland Server
Starting Server Master Client
Wonderland started

Once the server is running, you can check the log files in the lg3d-wonderland directory if anything doesn’t work right:

[opensolaris lg3d-wonderland]$ ls *.log
wonderland-bridge.log  wonderland-smc.log

Install Wonderland Web Admin in Glassfish

Now that the server is all set, it’s time to turn to the client. I want to launch the client via Java Web Start, so users can just click on a link to run the MiRTLE client. To do this, I need to deploy MiRTLE.war to a web container. I’m going to use the Glassfish container I downloaded earlier, although you should be able to use Tomcat or Jetty if you prefer.

Installing Glassfish can be done in two easy steps. Step one is to unpack the distribution file:

[opensolaris mirtle]$ java -mx768M -jar glassfish-installer-v2ur2-b04-sunos_x86.jar
Accept or Decline? [A,D,a,d] A
installation complete

Step two is to run the setup script to configure Glassfish. The Glassfish site has great docs about all the settings you can change (like directories and port numbers) in the setup.xml file. Make these changes before running the next step. I just use the default, which will put the web server on port 8080:
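For orientation, the port-related tokens in a stock v2 setup.xml look roughly like this (property names as I recall them from the Glassfish v2 installer; double-check against your own copy before editing):

```xml
<property name="domain.name" value="domain1"/>
<property name="instance.port" value="8080"/>
<property name="admin.port" value="4848"/>
<property name="admin.user" value="admin"/>
```

Change `instance.port` if something else on your machine already owns port 8080.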

[opensolaris mirtle]$ cd glassfish
[opensolaris glassfish]$ chmod -R +x lib/ant/bin
[opensolaris glassfish]$ ./lib/ant/bin/ant -f setup.xml
Buildfile: setup.xml
Total time: 31 seconds

Now I start up Glassfish:

[opensolaris glassfish]$ ./bin/asadmin start-domain
Starting Domain domain1, please wait.

And finally, I deploy the updated MiRTLE.war file:

[opensolaris glassfish]$ ./bin/asadmin deploy ../MiRTLE.war
Command deploy executed successfully.

Now the Wonderland client code is installed in the Glassfish server. If there are any problems, you can check the Glassfish server log:

[opensolaris glassfish]$ ls domains/domain1/logs/
jvm.log     server.log
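A small grep helper makes scanning the Glassfish log for failures quicker. SEVERE is the log level Glassfish uses for deployment errors, and the path in the usage comment is the default domain’s (adjust it if you changed setup.xml):

```shell
# Print the last few SEVERE entries from a Glassfish log file.
scan_log() {
    grep 'SEVERE' "$1" | tail -n 5
}

# Usage (from the glassfish directory):
#   scan_log domains/domain1/logs/server.log
```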

Running the MiRTLE Client

OK, everything should be up and running at this point.  From my browser, I can go to

And I should get the MiRTLE launch page:

Setting up a MiRTLE Server

Take a look at our original video to see how MiRTLE is being used.


Acquisition concerns: response from Karl Haberl

April 24, 2009

In the comments from our previous blog entry, Morris Ford raised the question of whether Wonderland would be "closed up and made proprietary."  In response to those concerns and others we have heard in both the Wonderland and Darkstar communities, Karl Haberl wrote the following guest blog.

Hi folks,

Karl Haberl here. For those that don’t know me, I’m the Director at Sun who looks after the business and management needs of Project Wonderland and Project Darkstar. I thought I would add a few comments.

As Nicole indicates, we are in a period when Sun is still operating as an independent company while all of the acquisition-related activities proceed. So for us it is (and must be) business as usual. The Wonderland development team is working on 0.5 with as much enthusiasm and excitement as ever! And we have no intentions of stopping there when it is released.

Nicole provided some links to publicly available information regarding the proposed acquisition. In the FAQ it says "Oracle’s acquisition of Sun is consistent with our strategy to provide complete, open and integrated systems."  Also: "Java is one of the computer industry’s best known brands and most widely deployed technologies. Oracle Fusion Middleware is built on top of Sun’s Java language and software. Oracle can now ensure continued innovation and investment in Java technology for the benefit of customers and the Java community."

When I look at the work we are doing I am encouraged. We are building a scalable enterprise-grade virtual world platform. It’s an integrated stack – all open source, all written entirely in Java, and backed by a persistent data store (in Darkstar) that uses Oracle’s Berkeley DB database. Wonderland-based implementations in the enterprise will require integrations with systems for authentication, identity management, directories, documents, and so on. Feels like a good fit to me!  But that’s just my personal opinion, not Sun’s or Oracle’s.  You should read the publicly available materials and draw your own conclusions. Who knows?

I would like to point out one last thing. To date, all of the code for Wonderland and Darkstar has been released under the GPL license. Check out these references:

My personal interpretation (realize that I am not a lawyer and I am not officially speaking for Sun or Oracle) is that, regardless of any decisions that Sun or Oracle might take with respect to development and licensing of future versions, the greater community – meaning all of you, not just the Sun team – would have the right to continue to develop and use the technology that has been released to date in compliance with the applicable license terms. I encourage you to read the references above and draw your own conclusions.

In time, things will become more clear for all of us. For now, we continue as we were. Cool technologies and a great community of talented people. My thanks to all of you for your passion, energy, and dedication!

Karl Haberl

Sun / Oracle News

April 21, 2009

On the Sun Immersion Special Interest Group, Matt Schmidt from the iSocial team posted this question, which I’m sure is one that many others may have as well:

Oracle buys Sun… How does this impact Wonderland development?
Posted by Matthew Schmidt on April 20, 2009

Got the following in my inbox and read it elsewhere on the intertubes. Is anyone else curious about how this might impact Wonderland development?

We’re getting ready to move into a significant scale deployment scenario, so we’re very interested in whatever information or insight people might have.


You can read the official press release here, and lots of other information just about everywhere.

Those of us on Sun Labs Wonderland team are still absorbing the news, but let me share a few preliminary comments. First, Sun continues to be an independent company until the acquisition is finalized. The press release says: "It is anticipated to close this summer, subject to Sun stockholder approval, certain regulatory approvals and customary closing conditions." Between now and then, our team will be full steam ahead on completing Wonderland version 0.5. Nothing about this news changes our commitment and dedication to the project.

Jordan and I are in DC this week attending the 3D TLC conference (we’ll blog about that later). It is personally difficult for me to be away from the office right now. Those of us on the team outside of California gathered in Wonderland yesterday to listen to our CTO talk to employees about the news. After the meeting concluded, the California-based team members, who attended the meeting in person, joined us in-world.  We were able to quickly gather all the information we have (not much at this point), and start thinking about what this change might mean for our project. After the gathering, I realized how much it meant to me to feel like I was really "with" my teammates during a time like this, regardless of my physical location. It was a great reminder of the importance of the technology that we’re building.

Over the coming months, we will do our best to share any information we get with the Wonderland community. In the short term, we do not anticipate any changes to the Wonderland roadmap or schedule.  In the longer term, we look forward to the opportunities for Wonderland in this new environment.

World Assembly in Wonderland v0.5

April 17, 2009

A long, long time ago, in a virtual world far, far away… building virtual worlds consisted of a hodge-podge of tools and technologies to import content from 3D modeling tools and arrange the content in-world. (Well, OK, it wasn’t that long ago: v0.4 first appeared only 7 months ago.)

At the heart of the system to assemble worlds was a collection of XML-formatted text files organized into a directory structure on disk, known as the Wonderland File System (WFS). WFS was a big improvement over what existed just a few months before the v0.3 release (i.e. creating and arranging the content by writing Java code), but it did initially require hand-writing XML files. Soon after (for v0.4) came the Wonderland World Builder (see Figure on right), a drag-and-drop interface to construct worlds from pre-fabricated components. On the back-end it would generate the WFS, and if you positioned your directories on disk properly, a "Reload World" command from the client updated the world with the changes you made in the World Builder.

The World Builder had a number of nice features: it presented a clean and easy-to-use catalog of components that you could easily drag onto a fixed grid. It made it easy to lay out and arrange seats within a room, or sections of wall or floor. But its nice features were also its limitations: it only supported static graphics, not the rich set of Cell types (e.g. telephones, whiteboards, etc.) available in Wonderland, and its fixed grid placed restrictive requirements on the size of each component in the catalog. Also, worlds built using the World Builder performed poorly, because potentially hundreds of individual Cells were created for what was essentially a simple scene.

Version 0.5: A Brave New World (Builder)

Let’s fast forward to the release we are currently working on (v0.5). Some architecture decisions we made for v0.5 made supporting the v0.4 World Builder very difficult (if not impossible). The most important of these decisions was the one to make Wonderland a dynamic virtual world, as described in Jon Kaplan’s recent blog, Persistence in Wonderland 0.5. No longer would the state of the world be entirely dictated by what exists within WFS: the live, dynamic state of the world is now stored in the internal database maintained by our Darkstar server middleware layer. It seemed, then, that any world-building activities would have to act upon this dynamic state, and that these tools would need to be integrated into the Wonderland client.

First off, we decided to change our terminology a bit: the tools I am going to describe are now called "world assembly," because they involve assembling and arranging existing components in-world, rather than building them from scratch. The Wonderland model is still one of an "open art path": content developers use external 3D modeling tools and export to a standard file format for import (in our case, COLLADA). World assembly consists of three components:

  1. Import and Create
  2. Arrange
  3. Edit

Let’s take a look at each of the three in turn.

Import and Create

The first step to assembling a world is importing and creating its contents. Generally, this consists of importing 3D models and creating instances of Cells. As I’m sure you’ve heard by now, in v0.5, we are supporting the COLLADA format for 3D models, and their import becomes far simpler too. (Back in v0.4, you needed to import the models, then grab the binary (.j3s.gz) file it produced, load it into a web server, and then create the WFS file to display the content in the world.)

Drag and Drop. In v0.5, this becomes as simple as dragging-and-dropping your model into the world. Your model file is parsed and automatically uploaded to our Webdav-based content repository hosted by the Wonderland server. Once uploaded, the system automatically creates a Cell that displays your model. (This feature will be available in dev5 in May; I have a prototype now).

Here’s a sled from the Google 3D Warehouse that I imported as a Google Earth (.kmz) file by dragging it from the desktop into the Wonderland client. You can also see the model appear under my directory on the Webdav web server: every user gets his/her own directory where models are automatically uploaded.

Sled from Google 3D Warehouse Content Browser

You can see that in addition to a user’s own space on the Webdav content repository, there is a "system" directory, and eventually there will be a directory in which you can create "groups" to store content for particular purposes.

In fact, it is more than just 3D models that you will be able to import via drag-and-drop. You will be able to drop any form of content into the world, and if there is a Cell type that supports that content type, it will automatically upload the content and launch a Cell to display it. We already do this today with the SVG Whiteboard module: you can drag-and-drop an SVG document from your desktop into the world and have it display in a Whiteboard. (I must say, pretty cool, huh?)

Cell Palette. This being Wonderland, there is naturally more than just 3D models that you can put into your world–there’s all those cool custom Cell types (that come packaged as modules). Introduced in v0.5 is the Cell Palette, a GUI that lets you create instances of Cell types in the world. One of the first steps in defining a custom Cell type is to define the factory class necessary to display your Cell in the Cell Palette.

The Cell Palette comes in two flavors, and I’m currently deciding which one will survive in the final release of v0.5 (perhaps both). The first one presents a text list of Cell names and a "preview" image of the Cell. You can select the Cell name in the list and click the Create button. The second presents the preview images for each Cell in a scrollable list and you can drag-and-drop the icons into the world to create them, which would appear as a window in the HUD (Heads-up Display).

Cell Palette Cell Palette (HUD Version)


Arrange

Once you’ve imported your 3D models and created all of the Cells you want, the next step is to arrange them in-world. For that, we provide three visual "affordances" (also called "manipulators") that let you (1) move, (2) rotate, and (3) scale your Cell. These affordances work for any Cell. Simply right-click on a Cell and, when the Context Menu appears, select "Edit...". The Move affordance appears first, along with a frame containing three toggle buttons and a slider. (This frame will eventually appear in the HUD.)


You can display all three affordances at once (as controlled by the three toggle buttons on the HUD panel), as shown on the right here:

  1. Move. The Move affordances appear as three double-ended arrows along each axis, colored individually (Red for X-Axis, Green for Y-Axis, Blue for Z-Axis). Simply drag each arrow to move along its axis.
  2. Rotate. The Rotate affordances appear as three discs, colored for each axis about which they rotate (Red rotates about the X-Axis, Green rotates about the Y-Axis, and Blue rotates about the Z-Axis).
  3. Resize. The Resize affordance appears as a semi-transparent black sphere. To resize the Cell uniformly in each axis, click on the sphere and drag either away from the center of the Cell (to make the Cell bigger) or towards the center of the Cell (to make the Cell smaller). 



Edit

The final step in world assembly is editing the properties of a Cell. A Cell’s "properties" are not a fixed set: they depend upon what properties the Cell exposes, and also on the properties of all of the Cell Components attached to the Cell. (We haven’t talked much about Cell Components yet, but they are a powerful part of the 0.5 architecture, and writing a blog entry about them is on our list.) To bring up the Cell Properties dialog, simply right-click on a Cell and, when the Context Menu appears, select "Properties…".

The left-hand column contains a list of categories. There are several standard entries, such as "Basic" and "Position" (see picture below). If you click on any of the entries on the left, you see a property sheet for that category on the right. Here, you can edit the position of the Cell using text fields, for more fine-grained control than the visual affordances above.

See the ‘+’ and ‘-‘ signs at the bottom left? These aren’t fully implemented yet, but they will let you dynamically add "capabilities" to a Cell. (You are really adding Cell Components to a Cell.) For example, suppose you want to add security attributes to your Cell: you’d first add the "security capability" to the Cell and then configure its parameters. Other examples of capabilities to add to Cells will be scripting and audio capabilities. There’s an API (a work in progress) that will let you register your own Cell Components as capabilities, along with a visual GUI property sheet to edit their properties.

Cell Properties

What’s Next?

Great question! We’ve come a very long way since 0.4, but in future releases, I think the world assembly can be made even easier by integrating collision and physics. For example, someone should be able to insert a chair into the world and have it fall to the floor (using gravity in the physics engine) or be able to push a sofa right up against a wall (using collision detection). It would also be great to have guide lines and "snap to grid" for arrangement and the 2D birds-eye view that the 0.4 world builder provided.

Needless to say, there’s plenty of work to be done for world assembly, and I view the tools in 0.5 as just the beginning. 

Virtual World Tidbits from CHI 2009

April 11, 2009

Last week, I attended the CHI 2009 conference here in Boston. I have been involved with the computer-human interaction (CHI) community since the early 1980s, and I host the monthly meetings of the local BostonCHI chapter, the first ever local CHI chapter. In addition to attending sessions, I was helping to staff the BostonCHI booth at the conference. Here I am at the booth with BostonCHI co-founder Kate Ehrlich and current chair, Doug Gibson.

BostonCHI booth at CHI 2009

There was not too much at the conference related to virtual worlds, but I did come across a few relevant tidbits to share with you.

Judy Olson on "Social Ergonomics"

One of my favorite social science researchers, Judy Olson from the University of California at Irvine, gave the keynote talk. Although the talk didn’t mention virtual worlds, the theme was highly relevant. Judy talked about a concept she calls "social ergonomics," which she describes as the "design of workplaces and systems that fit the natural social capabilities and inclinations of users." She discussed her large body of research on the impact of physical proximity on work outcomes. People who are "radically co-located" – working together in the same space – are almost twice as productive as those who are distant. This is due to awareness of others’ actions, gestures, and gaze, as well as the ability to have impromptu conversations. In real life, we judge how to behave towards others based on the distance they stand from us when talking, coupled with other subtle social cues, such as eye gaze.

Her analysis of some of the newer video conference systems was fascinating. In these systems, there’s often a large monitor and people on both sides of the conversation are facing the monitor, staring directly at one another. In real life, this is a confrontational stance. Most people in natural conversations stand or sit at an angle to one another that can be as much as 90 degrees.

Another relevant research area she talked about had to do with eye gaze as a social cue. When someone is speaking, they tend to move their gaze away from the listeners until they are ready to end their turn, at which point they look directly at the next person and pause. The listeners, however, typically continue to look at the speaker.  Also, humans are apparently much more sensitive to right-left eye motion than to up-down motion.  We perceive that someone is looking away from us when they move their eyes only slightly right or left. They have to make much bigger movements up or down before we perceive them to be looking away. So if someone is looking at our forehead or chin, we will perceive that they are still looking at us.

This research has many implications for virtual worlds, especially if we want to achieve the next level of immersion and start to see some of the benefits of "radical co-location" in world.  I think the most significant takeaway is that we must find a way to map natural, unconscious head gestures and eye gaze onto our avatars to replicate the social cues from the real world.

Hair Matters

Nick Ducheneaut from PARC presented an interesting study about avatar personalization called "Body and mind: a study of avatar personalization in three virtual worlds". One of his main conclusions was quite surprising. It turns out that hair style and hair color are the most significant avatar features. They are the features that people customize most often and that users seem to care about the most. Nick had several theories for why this might be the case. He argued that in real life, hair is our most malleable physical characteristic so people are used to "configuring" and changing their real-life hair. Hair is also highly visible to users of the virtual world because people spend a lot of time in third-person mode staring at the back of their avatar’s head. Finally, hair can be seen from far away and is the feature that most helps to identify other avatars at a distance.

Another finding from the study is that most people create avatars that have many characteristics similar to themselves. The most common type of avatar is one that can be considered an "idealized self" – similar characteristics to the real person, but improved in terms of weight, height, athleticism, and so forth. Those people who were most successful in creating an idealized self were also the people who were most attached to their avatars.

Challenges for Virtual World Users

The paper "Acquiring a Professional ‘Second Life:’ Problems and Prospects for the Use of Virtual Worlds in Business" (PDF) was presented by a CMU student who was an intern at IBM. It covers 5 challenges for virtual world users in a business context. One of the big challenges is motivating business users to try virtual world technology in the first place. Some people felt that no technology would ever replace face-to-face interaction, others thought the virtual world was too much like a game, while others worried that their management would not approve.

As we all know, another challenge is getting the technology to work and then learning how to use it. The paper discusses the process of "becoming a competent virtual person," which I thought was a great turn of phrase.

The other major challenges involved learning to control an avatar, interacting with others, and finding compelling activities that take advantage of the virtual environment.

Sharing Memories

Some folks at Kodak Research did a study called "Capturing and sharing memories in a virtual world." Not too surprisingly, they found that people in the virtual world, particularly heavy users, liked to capture and share "photographic" memories – in this case screen shots – in much the same way as people in the real world. And like people in the real world, virtual world users have inadequate tools for organizing snapshots for easy viewing and retrieval.

Improvisation for Brainstorming

I’m convinced that if we can do it right, virtual worlds will prove to be an excellent medium for remote brainstorming sessions. That’s why I was interested in the talk "Using improvisation to enhance the effectiveness of brainstorming" by Elizabeth Gerber from Northwestern University. She talks about using theatrical techniques to get the creative juices flowing. Most of the techniques she suggested required face-to-face interaction, but there was one that I thought might work well in a virtual world brainstorming session. She had everyone in the audience take an everyday object out of their pocket or bag. Then, working with a partner, you pass the object back and forth. The person holding the object has to think of a possible alternate use for it. For example, if the object is a pen, you might be able to use it as a "back scratcher," "hole puncher," "game spinner," or "drum stick." The idea is to come up with as many alternate uses as possible in a short amount of time. The exercise is intended to help people generate a lot of ideas quickly.

I was imagining doing this fun exercise in the virtual world by having a person drop an object or a photo of an object into the world. Each team could record their ideas on a shared whiteboard or just take turns speaking the ideas aloud.

User Experience in Open Source Projects

I attended a special interest group meeting on the topic of integrating user experience into open source software. It was interesting to hear about various efforts in this space and find out about some of the resources available. First, check out the OpenUsability web site. This site attempts to match people working on open source projects with usability professionals. They also sponsor the Season of Usability mentoring program for students.

I was particularly interested in a group called Aspiration which tries to help non-profit organizations improve their software. One way they do this is by running "usability sprints" – 3-day workshops where each participating non-profit identifies their most significant usability problem and a team of experts works with the developers to solve the problem, often writing code on the spot. I was thinking that a number of the non-profits working with Wonderland might like to apply for this program. They said they’re looking for new projects to support for a 2009 sprint.

Another potentially interesting group that might be relevant to Wonderland is Teaching Open Source. This group tries to match up people in open source projects who are willing to be mentors with students interested in working on an open source project.

3D User Interfaces

I attended part of a full-day course on 3D user interface design. The course notes will soon be available in the Resources section of the instructors’ 3D UI web site. This course covered a lot of material, but much of it was focused on virtual reality, caves, Wii-motes, and the use of head-mounted displays.

World Building

And finally, I leave you with a pointer to a fun video that a former colleague of mine recommended for anyone interested in world building:

Dev4 Avatar Saga

April 5, 2009

As you may know from my March 30th blog post (Dev4 Testing and the Sisine Whiteboard), the Sun Labs team has been preparing for Dev4 (the 4th developer release of Wonderland v0.5). After that March 30th test session, we did another test on Thursday that included a large amount of new avatar code. Things did not go well! Two people were never able to log in, performance was terrible, and those of us who did make it in experienced several crashes each.

I’m not quite sure how he did it, but over the course of only a single day, Paul pulled off an amazing turn-around. Friday’s test was a major improvement! There are certainly still many problems to work through including a show-stopper bug loading some avatars on Windows systems, but to everyone’s relief, the system is now usable again. Best of all, we are back to having multiple avatars, including female ones.

Avatars in Dev4

While we don’t yet have an avatar configuration tool, you can select a unique avatar by generating random combinations of hair and clothing until you find one that you like. Here’s how it works. From the Edit menu, select "Avatar Configuration." You’ll see this dialog:

Avatar configuration 1 Avatar configuration 2

Select the gender of the avatar you want and click "Randomize." Each time you click the button, you will see a different avatar. When you find one you like, or at least one you can live with, type in an Avatar Name. Now click "Add to My Avatars." As soon as you do this, the name of the avatar will appear in the "My Avatars" list and everyone else in world will see the new you. If you get confused about how you look to others, check the name in the "Current Avatar" field. This is always the name of the avatar that’s visible to others. Here I am in my new red shirt:

Red shirt avatar

We did have one particularly humorous moment during the avatar testing:

Nigel: "Oh no, I can’t wave any more."
Paul: "Did you change gender?"

As I said, there are still some problems to work out!

We also spent some time testing content importing. Here I dragged a screen shot into the world so that we could see how the new Wonderland icon will look on a Windows desktop and taskbar:

Windows screen shot in Wonderland
Oversized phone model

Someone else brought a phone model into the world. Too big? No problem. We were able to right-click on the model, select edit, and scale the model down to the correct size and rotate it into position.

Note to Mac Laptop Users

I’ve struggled on my laptop when I don’t have a mouse connected because right-clicking is hard and there’s no mouse wheel. Jordan, my hero for the day, showed me that you can "right click" on your laptop’s track pad by placing two fingers on the pad and clicking the button. If you drag with two fingers, it’s like using a mouse wheel. Very handy! Thanks Jordan.

Fooling around in Project Wonderland using an iPod Touch

April 1, 2009

Research projects at Sun Labs are informally categorised along two dimensions: by their relationship with Sun’s internal business units (such as server hardware, client software, or developer tools), and by their position in the development process (ranging from "Basic Research" to "Advanced Development").

We’ve recently completed a short project that fits into the category labelled "Preliminary Research Acquiring New Knowledge"–using Project Wonderland on an Apple iPod Touch. The video below captures the results of the project.
