These are rather long and detailed questions, but if anybody can answer even one of them I'd be very grateful!
1. FSET resolution 3 is listed as 4 m/Pix and resolution 4 also as 4 m/Pix. What is the difference between them?
2. What is the resolution of the best Virtual Earth images? Since images can be captured at 0.25 m/Pix, I assume there are many places where the source imagery is at least that good?
3. In another post I argued that a level 15 tile is about 800 x 800 metres. Given that tiles of all levels are 2048 x 2048 pixels, it follows that each level 15 pixel is about 40 cm x 40 cm. So level 15 is needed to take full advantage of FSET resolution 0 (0.5 m), as shown in @Rodeo's chart in another thread. That being the case, my question is about the justification for capturing FSET images at a resolution of -1 (0.25 m). My feeling is that, although the final resolution will only be 0.4 m, the quality of information that goes into each pixel will still be better if the FSET resolution is 0.25 m. Does that make sense? [The downside, of course, is that it takes 4x as long to capture the info from FSET.]
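For anyone who wants to sanity-check the arithmetic in question 3, here's a quick Python sketch. The 800 m tile size and 2048-pixel tile width are the assumptions stated above, not official figures:

```python
# Ground size of one pixel at a given tile level, assuming a level-15
# tile covers about 800 m and every tile is 2048 x 2048 pixels.
TILE_PIXELS = 2048

def pixel_size_m(level, level15_tile_m=800.0):
    # Each level down from 15 doubles the ground distance covered by a tile.
    tile_m = level15_tile_m * 2 ** (15 - level)
    return tile_m / TILE_PIXELS

print(f"Level 15: {pixel_size_m(15):.2f} m/pixel")  # 0.39 m, i.e. about 40 cm
print(f"Level 14: {pixel_size_m(14):.2f} m/pixel")  # 0.78 m
```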
4. When I look down (using FSET -1, level 15) I can see sheep in the fields. If the resolution is 40 cm, that means each sheep is an elongated blob of about 1 x 3 pixels - 0.4 x 1.2 metres being my estimated size for a sheep! So far, so good. What puzzles me, however, is that I have the impression that I can see detail smaller than a sheep - i.e. the detail looks smaller than the width of a sheep. Examples are: the white lines on roads (about 15 cm wide?), power cables between pylons (about 5 cm wide?), etc. If there is a 15 cm white line on blackish asphalt I would expect a greyish pixel to be produced with a width of 40 cm. My brain then interprets this as (a) white and (b) narrower than a sheep because it knows that, in reality, a road marking is white and narrower than a sheep. Is this what's happening - or is there something I don't understand about images?
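On the road-marking point in question 4, the averaging I'm imagining can be sketched in a few lines of Python. The reflectance values are made-up illustrative numbers, not anything from FSET or Virtual Earth:

```python
# One-pixel "mixing" sketch: a sensor pixel averages everything inside
# its footprint, so a sub-pixel white line still brightens the pixel.
pixel_width = 0.40       # metres (level-15 ground pixel)
line_width = 0.15        # metres (white road marking)
white, asphalt = 1.0, 0.1    # assumed reflectances on a 0..1 scale

fraction = line_width / pixel_width
grey = fraction * white + (1 - fraction) * asphalt
print(f"{grey:.2f}")     # 0.44: a grey pixel, clearly brighter than the road
```

So the line survives as a one-pixel-wide greyish streak, and the brain fills in "white and thin" from context, which is consistent with the interpretation above.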
5. Still on the subject of being tricked by the brain... Using FSET -1 and level 15, low buildings and trees look 3D to me from above about 100 metres. I can't decide whether this effect is enhanced by shadows, or whether it would occur even without shadows in the FSET images. (Unfortunately this doesn't work for taller structures - anything above about 2 storeys looks flattened from any altitude.) So it seems to me that I only need cultivation when I'm below 100 metres, above a town where there are taller buildings, when I need night lighting effects etc., or when the native FSET image quality isn't good enough to produce the 3D effect. Is this how other people see it, or is it just me?
6. The Oculus Rift has a resolution of 1080 x 1200 per eye and a field of view of 110 degrees. This means that the angular resolution is about 0.1 degrees per pixel. (Compare this with the angular resolution of the unaided eye, which is about 0.02 degrees or 0.0003 radians - so the Rift would need to improve its resolution by a factor of 5 to be equivalent to normal vision.) 0.1 degrees corresponds to a 40 cm (level 15) pixel viewed from a height of about 230 metres (roughly 750 feet). So, if you're using the Rift, it's worth having level 15 tiles at any altitude below 230 metres. It also means that you wouldn't really want to see any lower-level tiles (14, 13, etc.) below about 230 metres. Is there a danger that these lower levels will kick in when you don't want them (i.e. below 230 metres), thus lowering the effective resolution? (Given that I only fly low and slow, I'm trying to optimise my scenery for low altitude.)
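And the Rift numbers in question 6 as a Python sketch. The 1080 horizontal pixels per eye and 110-degree FOV are the figures quoted above; treating the view as straight down, with the pixel at the centre of the FOV, is a simplification:

```python
import math

# Angular resolution of the headset, then the altitude at which a 40 cm
# ground pixel subtends exactly that angle when looking straight down.
pixels = 1080            # horizontal pixels per eye
fov_deg = 110.0
ang_res_deg = fov_deg / pixels           # ~0.10 degrees per pixel
print(f"{ang_res_deg:.3f} deg/pixel")

ground_pixel = 0.40      # metres (level-15 pixel)
altitude = ground_pixel / math.radians(ang_res_deg)
print(f"{altitude:.0f} m")               # 225 m, i.e. roughly the 230 m above
```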