Wednesday, August 1, 2007

Immersive Development might one day change the way we create software, but I am sure you will still be able to use any of these popular and widely used development methods...

Thursday, July 19, 2007

Comments & Annotations

Code should be well commented. While methods and variables can have names that speak for themselves, most aspects of a system need to be written down somewhere else for future programmers to understand the system.

This is -uhm- problematic these days, and with a more powerful development environment it will only get worse. Immersive Development should enable us to create systems far more complex than those we build today. As I wrote before: our sense of orientation in a 3D environment is very powerful. Commenting will therefore become even more important.

Today, I can comment code and "pin" editors. That way, I can always have the relevant information in focus, and I can leave myself notes in the comments I write.

In a 3D environment, I would probably stick yellow notes with comments onto the objects. And I would use a telescope tool with a shortlist so I can quickly find the objects I am currently working on.

But there could be lots of other ways:

Remember those key-ring beepers? Stick a virtual beeper onto an object and it will shine or flash when you "call" it.

Create a shortcut to an object and stick it behind your virtual ear, onto your virtual forehead, or onto the back of your virtual hand.

Stick a huge flag into the object so you can see it even when you "are far away". Or paint the object with some neon colour.

Or what about this: you are currently working on two aspects of the system: one is the caching system you created to limit database access. The other is some complex functionality triggered by a button on the UI. Zoom out and grab the whole system. Turn it around slightly. Now pull all objects (classes, interfaces, libraries) implementing the cache towards you so you can see them. Grab the whole system again and turn it around some more. Now grab the objects involved in the calculation and pull them towards you. You can now just turn the whole system around slightly to switch your point of view. Had an idea for the calculation? Turn the system to the right and work on it. Then turn it back and continue on the cache.

And I'm sure there are more ways than I could even count...

Tuesday, July 17, 2007

Basic Navigation in Space: Use Your Hands!

Immersive Development should make software development easier because it enables the developer to use all her senses, specifically her sense of orientation as well as the full range of movements human hands can perform.

This is far more powerful than using ten fingers on a keyboard! In fact, it is probably so powerful that it will be difficult to predict how to best interact with the space before someone implements it.

Some basic gestures seem to be straightforward, though.

Grabbing an object: selects the object. Once you've grabbed it, you hold it in your hand and can then manipulate it. An example would be to grab a button template from the toolbox and then place it on the canvas that represents the UI.

Pulling an object towards you: will zoom in. While from "far away" you can only see the object itself, you will be able to see public methods when you pull it towards you to zoom in on it. When it gets closer, you will eventually be able to see all methods. The actual code inside the method could be represented by a scroll. Pull it down and see how this particular method has been implemented. Write on the scroll to change the code.

Pushing an object away: will put it back into place. When you've finished looking at an object, you push it away, and it will snap back into place, just where you found it before.

Pushing an object around: will change its location in virtual space. You might do this to better reflect the architecture of your system, or just because you want to organize your VR space.

Drawing rectangles and boxes: might act a bit like the selection rectangle on current 2D systems. You might draw a box around a group of objects in order to select them all.

Turning an object around: will enable you to see it from a different angle. As with the UI, each side of the object might show you different aspects of it - methods, properties, interfaces, related classes, and so on.

Throwing an object over your shoulder: deletes the object (or puts it into a virtual bin).
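The "pull towards you to zoom" gesture above could surface the public methods first. As a minimal sketch, assuming a Java-based system, plain reflection is enough to list just the public surface of a class; `ZoomInspector` and `publicMethods` are invented names for illustration, not a real IDE API:

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: what the IDE might show when an object is pulled
// to a medium distance -- only the public methods, nothing private yet.
class ZoomInspector {
    static List<String> publicMethods(Class<?> cls) {
        List<String> names = new ArrayList<>();
        for (Method m : cls.getDeclaredMethods()) {
            // Filter out everything that is not part of the public surface.
            if (Modifier.isPublic(m.getModifiers())) {
                names.add(m.getName());
            }
        }
        return names;
    }
}
```

Pulling the object closer still would drop the filter and include private methods, and finally unroll the "scroll" with the method bodies themselves.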

Monday, July 16, 2007

Collaboration

Big projects are not done by a single developer, obviously.

So how do two developers work together using Immersive Development?

First, they both look at their individual version of the code, just like they do these days, using CVS, Subversion or a similar tool.

But when they check out code into their VR space, they can arrange it as they wish. While one developer might prefer the UI on his left, another might want to see it below his point of view.

A project administrator can draw a box and give an individual programmer access to just that box, nothing else. For that programmer, there would be incoming and outgoing pipes in the walls, and he would implement the code in between. An admin might also give read-only access to all code, but write access to only a box. That way, a developer would be able to see where exactly in the system his code sits.

Now because all of this is in 3D, I can rearrange all the elements until I understand the structure of the system. I can pull things apart, zoom in on them, or hover above. I can turn the system around and look at it from a different angle. I can "pin" some of the objects, maybe by adding them to my shortlist.

The shortlist might be similar to a telescope with a little list of items attached to it. Click on an item and the telescope swings around and points at the object.

If I want to discuss parts of the code with a fellow developer, I pass him a shortlist and he can use his telescope to find the objects and inspect them.

The Architecture

I want to write code that implements everything I need to make the UI function. Some of that code will run on the client machine, some might run on a server. There might even be multiple servers working together to respond to the users' actions.

In a normal project, you would probably decide early on whether you will have a thin client or a full-blown application, and whether you will use an application server and a database in the background. You might decide to use a cluster of web app servers. You will decide which parts of the system will do which specific part of the work, and how the parts will communicate.

In Immersive Development, I would do all that in space.

Quite literally "behind" the UI canvas, I would at some distance create a large box that could represent a virtual machine (e.g. a JVM). From my toolbox, I would grab a database object and place it somewhere behind that box. I now have a three-tiered architecture in place, and I can decide where to put other parts of the system by placing them in the big box or next to the UI or the database.

I need the code in each box to be able to communicate with the others, so I grab one of the CORBA elements from my toolbox and connect the UI box with the box in the middle. I can now, with my index fingers, connect the "Save" button's "OnClick" stub to some method in the business logic VM, the box in the middle.
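The real connection would go through a CORBA stub, but stripped to its essence the generated glue amounts to handing the button a reference to the remote method. A minimal local sketch, with `SaveButton` and `BusinessLogic` as invented stand-ins for the two connected boxes:

```java
import java.util.ArrayList;
import java.util.List;

// Invented name: stands in for the business-logic VM (the middle box).
class BusinessLogic {
    final List<String> saved = new ArrayList<>();
    void save(String record) { saved.add(record); }
}

// Invented name: stands in for the button on the UI canvas.
class SaveButton {
    private Runnable onClick;                         // the "OnClick" stub
    void connect(Runnable handler) { onClick = handler; }
    void click() { if (onClick != null) onClick.run(); }
}

// The index-finger gesture would then generate wiring like:
//   BusinessLogic logic = new BusinessLogic();
//   SaveButton save = new SaveButton();
//   save.connect(() -> logic.save("current draft"));
```

In the CORBA case, the lambda body would be a call on a generated remote stub rather than a direct method call, but the shape of the connection is the same.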

Similarly, I can grab a JDBC driver element from the toolbox and use it to connect the business logic VM to the database. The JDBC driver element has some settings written on it, so I zoom in on them and adjust them.
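The "settings written on" the JDBC driver element are just the usual connection parameters. As a sketch, the host, port, database name and credentials below are invented placeholders:

```java
import java.util.Properties;

// Hypothetical sketch of the settings panel on a JDBC driver element.
class JdbcSettings {
    final String url;
    final Properties props = new Properties();

    JdbcSettings(String host, int port, String database) {
        // Standard JDBC URL form; PostgreSQL chosen arbitrarily here.
        url = "jdbc:postgresql://" + host + ":" + port + "/" + database;
    }

    JdbcSettings credentials(String user, String password) {
        props.setProperty("user", user);
        props.setProperty("password", password);
        return this;
    }
    // With a driver on the classpath, the generated code would then call
    // DriverManager.getConnection(url, props) to open the connection.
}
```

Zooming in on the element and "adjusting" a setting would simply rewrite one of these values before the connection code is regenerated.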

The IDE needs to generate the code that makes all of this work. That is obviously the responsibility of the toolbox element vendors, though some code generators would probably ship with the IDE itself.