Hello!

I'm currently rendering a fractal whose initial time estimate was 8 hours; it quickly rose to 9 hours, then 10. It has now been running for almost 3 hours and still predicts between 9 and 10 hours.

Here's my guess at what is happening: The fractal is basically a spiral with the highest iteration counts toward the center of the image, so I would expect the rendering to slow down up until about 50%, then speed up again. As far as I can tell, the rendering process just works through columns from top to bottom and splits one of the remaining columns whenever another one finishes early.

However, rendering a few random pixels up front, say 0.1% or maybe 0.5% of the image, should deliver a much better estimate of the total render time, since the per-pixel cost can vary wildly depending on how the iteration counts are distributed across the image. I don't know the cost of generating random integers, so this might add some overhead, though I would guess it's negligible compared to the actual iteration work.
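A minimal sketch of that sampling idea, assuming a standard Mandelbrot-style escape-time loop as the per-pixel work (the actual fractal formula, view window, and image size here are placeholders, not the renderer's real values):

```python
import random
import time

def escape_time(cx, cy, max_iter=1000):
    # Placeholder per-pixel work: standard Mandelbrot escape-time iteration.
    zx = zy = 0.0
    for i in range(max_iter):
        zx, zy = zx * zx - zy * zy + cx, 2.0 * zx * zy + cy
        if zx * zx + zy * zy > 4.0:
            return i
    return max_iter

def estimate_total_seconds(width, height, sample_fraction=0.001):
    """Time a random subset of pixels and extrapolate to the full image."""
    total_pixels = width * height
    n_samples = max(1, int(total_pixels * sample_fraction))
    # Sample without replacement so expensive regions aren't double-counted.
    indices = random.sample(range(total_pixels), n_samples)
    start = time.perf_counter()
    for idx in indices:
        x, y = idx % width, idx // width
        # Map pixel coordinates to an assumed view window on the complex plane.
        cx = -2.0 + 3.0 * x / width
        cy = -1.5 + 3.0 * y / height
        escape_time(cx, cy)
    elapsed = time.perf_counter() - start
    # Scale the sampled cost up to the whole image.
    return elapsed * total_pixels / n_samples
```

Because the sample is spread uniformly over the image, the estimate captures the expensive spiral center in proportion to its actual area, which a top-to-bottom column sweep only discovers partway through the render.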

This isn't a serious problem, since it's easy enough to understand why it happens; it just seems like a relatively simple fix would deal with it.
