Immersive Development should make software development easier because it lets the developer use more of her senses, in particular her sense of orientation, as well as the full range of movements human hands can perform.
This is far more powerful than using ten fingers on a keyboard! In fact, it is probably so powerful that it will be difficult to predict how to best interact with the space before someone implements it.
Some basic gestures seem to be straightforward, though; a rough sketch of how they might map onto actions in code follows the list below.
Grabbing an object: selects the object. Once you've grabbed it, you hold it in your hand and can then manipulate it. An example would be grabbing a button template from the toolbox and placing it on the canvas that represents the UI.
Pulling an object towards you: zooms in. From "far away" you can only see the object itself; as you pull it closer, its public methods become visible, and eventually all of its methods. The actual code inside a method could be represented by a scroll: pull it down to see how that particular method has been implemented, and write on the scroll to change the code. (A sketch of this distance-based level of detail follows the gesture list below.)
Pushing an object away: puts it back into place. When you've finished looking at an object, push it away and it snaps back to exactly where you found it.
Pushing an object around: changes its location in virtual space. You might do this to better reflect the architecture of your system, or simply to organize your VR space.
Drawing rectangles and boxes: might act a bit like the selection rectangle in current 2D systems. You might draw a box around a group of objects in order to select them all; see the sketch after this list.
Turning an object around: lets you see it from a different angle. As with the UI, each side of the object might show a different aspect of it: methods, properties, interfaces, related classes, and so on.
Throwing an object over your shoulder: deletes the object (or puts it into a virtual bin).
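
To make these mappings a bit more concrete, here is a minimal sketch of how such gestures might translate into actions. Everything in it (the `Gesture` type, `SceneObject`, `dispatch`) is made up for illustration; a real system would sit on top of an actual VR runtime. Pulling (zooming) is sketched separately below.

```typescript
// A hypothetical gesture-to-action mapping. None of these names come from a
// real VR API; they only illustrate how the gestures above might behave.

type Vec3 = [number, number, number];
type Face = "methods" | "properties" | "interfaces" | "relatedClasses";

type Gesture =
  | { kind: "grab" }
  | { kind: "pushAway" }
  | { kind: "pushAround"; to: Vec3 }
  | { kind: "turn"; face: Face }
  | { kind: "throwOverShoulder" };

interface SceneObject {
  name: string;
  position: Vec3;
  homePosition: Vec3; // where "pushing away" snaps the object back to
  visibleFace: Face;
  selected: boolean;
  binned: boolean;
}

function dispatch(obj: SceneObject, gesture: Gesture): void {
  switch (gesture.kind) {
    case "grab":
      obj.selected = true; // the object is now "in hand" and can be manipulated
      break;
    case "pushAway":
      obj.position = [...obj.homePosition]; // snap back to where it was before
      obj.selected = false;
      break;
    case "pushAround":
      obj.position = gesture.to;     // relocate it within the VR space
      obj.homePosition = gesture.to; // the new spot becomes its place
      break;
    case "turn":
      obj.visibleFace = gesture.face; // show methods, properties, interfaces, ...
      break;
    case "throwOverShoulder":
      obj.binned = true; // delete, or park it in a virtual bin
      break;
  }
}
```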
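
Pulling an object closer is essentially level of detail driven by distance. A minimal sketch, with invented threshold values:

```typescript
// Distance-driven level of detail for "pulling an object towards you".
// The thresholds are invented for illustration only.

type DetailLevel = "objectOnly" | "publicMethods" | "allMethods" | "codeScroll";

function detailFor(distanceMeters: number): DetailLevel {
  if (distanceMeters > 3.0) return "objectOnly";    // far away: just the object
  if (distanceMeters > 1.5) return "publicMethods"; // closer: the public interface
  if (distanceMeters > 0.5) return "allMethods";    // closer still: all methods
  return "codeScroll";                              // in hand: unroll the code
}
```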
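
And box selection is, at its core, a containment test: pick every object whose position lies inside the drawn box. Again, the names and types here are assumptions, not an existing API.

```typescript
// Box selection: the 3D analogue of a 2D selection rectangle.
// An object is selected if its position lies inside the drawn box.

type Vec3 = [number, number, number];

interface Box {
  min: Vec3; // corner with the smallest x, y and z
  max: Vec3; // corner with the largest x, y and z
}

function contains(box: Box, p: Vec3): boolean {
  return p.every((v, i) => v >= box.min[i] && v <= box.max[i]);
}

function selectInBox<T extends { position: Vec3 }>(objects: T[], box: Box): T[] {
  return objects.filter((o) => contains(box, o.position));
}
```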