While the rendering is in the blending stage, I see that my CPU is not being used to the max. Looking in the Windows resource monitor I see no obvious reason for it. Is there a reason why this happens and, better, how can I solve it? When all bottlenecks are gone the CPU should go to 100%, right?
When I say no obvious reason, I mean that the disk I/O is remarkably low. If the pagefile were swapping heavily, I would expect to see it there. But as it looks, there is no heavy traffic on E:, which is my SSD and also holds the APG temp directory.
Hi Hans, for max performance:
- Try to stop all other unneeded tasks (onenote.exe etc.).
- Stop antivirus tools (!), they sometimes scan all your temp files.
- Activate all buffers/caches on the HDD and, if possible, on the SSD.
- Check the temperatures of your hardware; if a component overheats, it will be throttled down.
- Check the multicore settings in APG.
- Do you have XMP memory?
Thanks for all the tips, I will check all these points. Meanwhile I upgraded from 6 GB to 16 GB, which means going from 2.5 GB of free memory to 11 GB free for APG to use. That is very comfortable, even when not doing gigapanos. Still, your list is a good one!
Hi Hans, have you only upgraded the memory, or the motherboard too? In some cases, using the maximum RAM amount will degrade performance because the RAM then runs only in single-channel rather than dual-channel mode. That means only 50% of the memory bandwidth and could be a new bottleneck ;-)
I upgraded after the original problem occurred. It took me from 5.5 hours to 24 minutes on a certain test set. I am now testing to see whether the problem described here still occurs.
Yes, I know. I used to have 3 sticks of 2 GB because that was supposed to enable 3 channels of communication. Having 8 GB would not have made much difference anyhow, as the OS I was using at the time did not support it. Now I have 4 sticks of 4 GB, and it could well be that this is slower. But the advantage of having 16 GB instead of 6 GB, or better said 11 GB free instead of 2.5 GB, is probably bigger than the loss in memory bandwidth.
I think the fact that I went from 5.5 hours to 24 minutes shows it is a good trade-off :-)
A good multithreading algorithm is hard to achieve. We have been working on this topic for 2 years already with Intel as a partner. An engineer there validates the changes we make in the code to see whether the idea/spirit of each algorithm will scale well or not. This is complicated. Many of the stages are now multithreaded, but some of them are not at all. Moreover, we have to take hyper-threading into account: it is a core, but a virtual core, and it doesn't have the same abilities as a real physical core. Be assured that on the MT topic we are really good, even if the task manager isn't showing it well (BTW: it's the worst MT benchmark, but unfortunately it's the only visible one).
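The scaling limit described above (some stages parallel, some not) can be quantified with Amdahl's law: if a fraction p of the work is parallelized across n cores, the overall speedup is 1/((1-p)+p/n). A quick illustrative calculation in Python, with hypothetical numbers rather than Autopano's measured figures:

```python
def amdahl_speedup(p, n):
    """Overall speedup when a fraction p of the work runs on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 80% of the pipeline parallelized, 8 cores give only ~3.3x,
# which is one reason the CPU never pins at 100% across all cores.
print(round(amdahl_speedup(0.8, 8), 2))  # 3.33
```

This also explains why the task manager is a poor MT benchmark: the serial stages keep overall utilization well below 100% even when the parallel stages scale perfectly.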
So in your case, blending, we only use the real cores and not the virtual cores, because using them would have slowed the process down. Here the I/O dominates, and a virtual core cannot do I/O (in fact it can, but it would lock the second logical core sharing the same die). So we just use real cores.
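As an illustration of that idea (not Autopano's actual code), a worker pool can be capped at the physical core count rather than the logical count. Python's `os.cpu_count()` reports logical cores; halving it on a hyper-threaded machine is only a rough approximation of the physical count, and `blend_tile` is a hypothetical stand-in for an I/O-heavy blending step:

```python
import os
from concurrent.futures import ThreadPoolExecutor

# os.cpu_count() returns logical cores; on a hyper-threaded CPU roughly
# half of them are physical. Halving is an assumption for illustration,
# not how Autopano actually detects cores.
logical = os.cpu_count() or 1
physical_estimate = max(1, logical // 2)

def blend_tile(tile_id):
    # Hypothetical placeholder for an I/O-heavy blending step.
    return tile_id * 2

with ThreadPoolExecutor(max_workers=physical_estimate) as pool:
    results = list(pool.map(blend_tile, range(8)))

print(results)  # [0, 2, 4, 6, 8, 10, 12, 14]
```

Capping the pool this way avoids scheduling two I/O-bound workers onto the two logical cores of the same physical die, which is the contention described above.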