For this discussion I will consider only projects with a single anchor, and set aside the possibility of using two or more anchors with your heuristic algorithm for color correction. Multiple anchors will be the topic of another discussion.
To clarify this explanation, let's separate the way pictures are blended from the way pictures are color corrected.
OK, that's a good starting point for explaining the blending methods. I will try to do so.
As I understand it, the blending modes are:
none: find the borders between images in the overlapped areas, clip the images to those borders, and assemble the big image from the clipped fragments. The overlapped parts of the images are not used.
linear: similar to 'none', but mixes parts of neighbouring images in a small zone along the borders to produce a smooth result. All overlapping information outside the blending zone is unused.
multiband: uses all overlapping information to find better color values for every point in the overlapping areas. This method is the best one for HDRI, because it finds the correct color from the set of overlapping images.
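The 'linear' mode above can be sketched in a few lines. This is a minimal illustration of linear feathering on 1-D strips, not APP's actual implementation: outside the overlap each image is used as-is, and inside it the weights ramp linearly from one image to the other.

```python
import numpy as np

def linear_blend(left, right, overlap):
    """Blend two 1-D image strips whose last/first `overlap` pixels coincide.

    Outside the overlap each strip is used unchanged; inside it, weights
    ramp linearly from the left image to the right one (simple feathering).
    """
    w = np.linspace(1.0, 0.0, overlap)                    # weight of the left strip
    blended = w * left[-overlap:] + (1 - w) * right[:overlap]
    return np.concatenate([left[:-overlap], blended, right[overlap:]])

# Two constant strips: the left is dark (50), the right is bright (200).
left = np.full(10, 50.0)
right = np.full(10, 200.0)
result = linear_blend(left, right, 4)
# The seam now ramps smoothly from 50 to 200 instead of jumping.
```

With 'none' the seam would be a hard step from 50 to 200; the linear ramp hides it, but all overlap data outside the blend zone is still discarded.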
Blending is simply the act of creating one picture from two pictures placed one on top of the other (like layers in Photoshop). Since blending has nothing to do with color correction, let's ignore it. To illustrate this, go to the editor options and set "no blend" as the default blender for the preview; you'll see what I'm talking about.
I can predict the result without the experiment: visible borders on normal panoramas (where the set of images is used to extend the visible area). But for HDRI projects (where the set of overlapped images is used to extend the dynamic range), the result will be different: the DR extension will be removed. Is that correct? That is why I tie blending to color correction: for HDRI projects, blending is an important part of selecting the pixel colors.
B. Standard color correction:
APP changes pixel colors according to the anchor type of each picture. You can have one or more hard anchors (in yellow), which say that the picture should be affected by the color equalization algorithm. This mode is for when you don't have too much change in exposure (up to 2 or 3 EV).
You can use more than one hard anchor: APP will find a way to equalize the color correction between these constraints.
Note about float: the full process is done in float. After the first read of the input picture (JPEG, TIFF, RAW, etc.), it is stored in float in memory, and all calculations are done in float. Every format is used at the maximum possible dynamic range (for example, RAW gives at least 16 bits). The levels tool, even though it shows a well-known Photoshop-like design, is in fact a float levels tool (it is not limited to values from 0 to 255). After all operations (stitching, projection, color correction, filters, levels), there may be, at the end of the process, a reduction to 8 bits or 16 bits depending on the format you want as output. But during the whole workflow, the maximum dynamic range is used.
Let me illustrate this with numbers. To simplify, I write integer values, but remember that the operations are in float.
We have 3 images with different exposures: the anchor is medium, one image is darker, and one is lighter.
the darker image's colors 1..255 are mapped to 1..255
the anchor's colors 1..255 to 4..1020
the lighter image's colors 1..255 to 16..4080
(the multiplier values are arbitrary, just for the example)
We perform all the operations (levels, stitching) and map the result back to 3x8 bits: 4..1020 -> 1..255, stripping the extra brightness information. Or, maybe, 0..4080 -> 0..255 (using the full DR without stripping)? I think the first idea is correct.
For 16-bit output we write the whole range (if possible) to 16 bits, or strip the extra information.
For HDR files we write all the information.
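The mapping I describe above can be sketched as follows. This is a hypothetical illustration, not APP's actual code: each 8-bit exposure is multiplied into a common float scale by an exposure factor (x1 for the darker frame, x4 for the anchor, x16 for the lighter one), and at the end the anchor's window is requantized back to 8 bits, clipping whatever falls outside it.

```python
import numpy as np

# Assumed exposure factors matching the worked example above.
factors = {"darker": 1, "anchor": 4, "lighter": 16}

def to_float_scale(pixels_8bit, factor):
    # 1..255 -> factor..255*factor; kept in float for all later operations.
    return pixels_8bit.astype(np.float64) * factor

def to_8bit(pixels_float, lo, hi):
    # Map the chosen window [lo, hi] back to 0..255, clipping the rest
    # (this is the "stripping extra brightness information" step).
    scaled = (pixels_float - lo) / (hi - lo) * 255.0
    return np.clip(np.round(scaled), 0, 255).astype(np.uint8)

anchor = to_float_scale(np.array([1, 128, 255]), factors["anchor"])  # -> 4..1020
out = to_8bit(anchor, 4.0, 1020.0)   # anchor window 4..1020 -> 0..255
```

For 16-bit output the same `to_8bit` idea would scale to 0..65535 instead; for an HDR file format the float values would be written as-is, with no window and no clipping.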
C. HDRI mode:
You have big gaps in exposure, more than 2 or 3 EV. EXIF data is absolutely needed.
Why is it important? Maybe 'standard' mode does not use EXIF and finds the exposure difference automatically by analyzing the overlapped areas, while HDRI mode gets it from the EXIF? Is this, plus using the HDR slider to change the displayed picture and having to use filters to do levels or to save to 8/16 bits, the only difference from 'standard' mode?
It works the same way as "Standard Mode". Anchors do influence the results.
How can anchors influence anything, if we use only one anchor and ALWAYS use the full DR?
In this mode, what is displayed in the preview can be really overburned. You have to use the HDR levels to set a display window that shows the true values of the HDR.
But in 'standard' mode you can display a useful image for a DR project. Why not do the same in HDRI mode?
Your screen has an 8-bit dynamic range, but when using HDRI in APP you can easily reach a dynamic range of 1,000,000 (even from 3 or 4 8-bit files!). So your screen needs some help to be able to display such pictures. That's why a tone mapper has to be used here: to transform a picture with values from 0 to 1,000,000 into a picture with values from 0 to 255 (a High Dynamic Range Image into a Low Dynamic Range Image, HDRI to LDRI).
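As a minimal sketch of what a tone mapper does (using Reinhard's well-known global operator, not necessarily APP's algorithm): huge HDR luminance values are compressed into [0, 1) via L/(1+L) and then quantized to the screen's 8-bit range.

```python
import numpy as np

def tonemap_reinhard(hdr, exposure=1.0):
    """Compress HDR values into the 8-bit range of a screen.

    Reinhard's global operator L/(1+L) maps 0..infinity into 0..1,
    so even values near 1,000,000 land just below 255.
    """
    l = hdr * exposure
    ldr = l / (1.0 + l)
    return np.clip(np.round(ldr * 255.0), 0, 255).astype(np.uint8)

# From near-black to a value of 1,000,000, as in the example above.
hdr = np.array([0.0, 1.0, 100.0, 1_000_000.0])
ldr = tonemap_reinhard(hdr)
# The bright end is compressed toward 255 while dark values stay distinguishable.
```

The `exposure` parameter plays the same role as the HDR slider: it shifts which part of the huge range ends up in the visible window.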
But a similar problem also exists in 'standard' mode!