I did something similar here: http://www.flickr.com/photos/devinbrines/5530242763/
I don't know about .xml support for your panohead, but for my photo I used the Merlin head.
Autopano Giga must be stitching from the .xml coordinates alone when it cannot find control points in extremely out-of-focus areas. Some people told me this was not possible--stitching solely from .xml coordinates--but they were wrong.
For my image, I stitched 180 photographs together, shot with 40% overlap. Autopano Giga stitched the out-of-focus photographs (in other words, nearly all of them) entirely from the .xml coordinates.
For it to work, the lens's nodal point needs to be on the axis of the head.
I don't know how to make Autopano Giga place images from the .xml coordinates without it also searching for control points. For photographs such as this, it is better if Autopano does not try to find control points at all; when stitching out-of-focus images from .xml coordinates, it is better if it finds none. Its innate ability to place the images from the coordinates is, in my opinion, satisfactory.
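To see what "placing from the .xml coordinates" means in practice, here is a minimal sketch of reading per-shot yaw and pitch from a Papywizard-style data file. The element and attribute names below (`pict`, `position`, `yaw`, `pitch`) are my assumption about the file layout, not a verified schema -- open your own .xml file and adjust the names to match.

```python
# Hedged sketch: extract per-shot (yaw, pitch) from a Papywizard-style
# .xml file. The <pict>/<position> layout is an ASSUMPTION -- check
# your own file's structure before relying on these names.
import xml.etree.ElementTree as ET

SAMPLE = """<papywizard>
  <shoot>
    <pict id="1"><position yaw="0.0" pitch="-30.0"/></pict>
    <pict id="2"><position yaw="14.4" pitch="-30.0"/></pict>
  </shoot>
</papywizard>"""

def shot_positions(xml_text):
    """Return one (yaw, pitch) tuple per <pict> element, in file order."""
    root = ET.fromstring(xml_text)
    out = []
    for pict in root.iter("pict"):
        pos = pict.find("position")
        out.append((float(pos.get("yaw")), float(pos.get("pitch"))))
    return out

print(shot_positions(SAMPLE))  # -> [(0.0, -30.0), (14.4, -30.0)]
```

Each shot's angles are known from the head, which is why the stitcher can lay the frames out on the sphere even when there is nothing in them to match.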
You need to do what you can to ensure that Autopano does not find control points in the out-of-focus regions; any points it does find and link there are likely to be extremely inaccurate, and they will override the software and hardware's innate ability to place the images.
So, in order to ensure that it does not find control points, this is what I recommend:
1. Set the parameters of the panorama as if the subject were in the shot.
2. Take the subject out of the shot so as to eliminate the possibility of Autopano Giga finding control points on your subject (as I assume your subject will be very close to the camera's minimum focusing distance).
3. Then focus at the minimum focusing distance of the lens, so that nothing in the frame is in focus and there are no details Autopano Giga will be able to connect.
4. Shoot the panorama using Papywizard.
5. Without touching the camera (!), put your subject back in the scene.
6. Use your remote to electronically move the head so the lens points at your subject, then focus on them.
7. Now, reshoot the panorama using the same grid as before. (I know there is a repeat mode in Papywizard.)
8. This will reshoot all of the images at the same .xml coordinates as the first set; assuming you didn't move the camera, the positions should match the second time around.
9. Go into Autopano Giga and use the import wizard to stitch together the out of focus panorama.
10. Once it has stitched flawlessly, save a new file.
11. In the new file, replace each photo with the matched photograph from the other set; in other words, replace the first photo of the out of focus panorama with the first photo of the in-focus panorama. Do this for all of the photos.
12. The final result should be what you desire. You might have to tweak the optimization settings, or turn the final optimization off entirely; I'm not sure. Experiment with the settings before the final render to see what looks best. Perhaps it's better if the panorama optimizes; perhaps it's better if it doesn't. It's up to you, really.
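Step 11 (swapping the out-of-focus photos for the in-focus ones) can be automated if both panoramas were shot in the same grid order. The sketch below copies each in-focus frame over the filename of the matching out-of-focus frame, so a project file that references the out-of-focus names picks up the sharp images. The folder layout and the one-to-one sorted pairing are assumptions; back up the originals first.

```python
# Hedged sketch of the step-11 swap: overwrite each out-of-focus file
# with the same-ranked in-focus file. ASSUMES both folders hold the
# same number of frames, shot in the same order, sorted by filename.
import shutil
from pathlib import Path

def swap_in_focus(oof_dir, focus_dir):
    """Copy each file in focus_dir over its counterpart in oof_dir."""
    blurred = sorted(Path(oof_dir).iterdir())
    sharp = sorted(Path(focus_dir).iterdir())
    assert len(blurred) == len(sharp), "the two sets must have equal counts"
    for dst, src in zip(blurred, sharp):
        # The stitcher's project file keeps pointing at the old names,
        # but the pixels underneath are now the in-focus shots.
        shutil.copyfile(src, dst)
```

Work on a copy of the out-of-focus folder so the blurry originals survive in case the stitch needs to be redone.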
So yes, the method is complicated, and perhaps the whole thing would work without the replacement step. But I find it extremely likely that the program would try to find control points between images that are only a little too out of focus to stitch. My example was easier because the subject was in focus and the background wasn't; there was a clear difference. In your examples, the difference between in-focus and out-of-focus is not distinct. I see this as a potential problem, and so I have recommended a workaround.
If anyone knows how I could stitch a panorama based completely on the native .xml positions, without Autopano trying to find control points, I would appreciate them passing the knowledge on to me. You would benefit as well, since you would no longer need to replace all of the photos to work around Autopano stitching from control points.
Some people say that the Merlin head is too inaccurate for this. In my tests, I've found the opposite to be true, but perhaps my test with the paper crane on stage is a lot more forgiving than stitching the human body--there might be less potential for error in stitching together the out of focus theater.
I do know that a more accurate panorama head is on the way: the Roundshot VR Drive Generation II. Not only is it more accurate, but it supports heavier lenses, and it seems much easier to position a lens at its nodal point. The downside: the device costs over $3,000.