The problem statement is:
"Given a 2D grid with multiple source cells, we need to compute for every cell (x, y) the minimum cost to any source, where the cost between two cells is defined as the squared Euclidean distance: cost((x,y),(u,v)) = (x-u)^2 + (y-v)^2."
I am interested in understanding why Dijkstra fails for this task if we start propagation from all source cells simultaneously, keeping only best_source[x][y] (the nearest source found so far for cell (x, y)) for each visited cell (see the sketch below).
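To make sure we are talking about the same scheme, here is a minimal sketch of what I mean. The grid size, source list, and names like bestSource are just placeholders; the point is that each cell's cost is recomputed from the carried source rather than accumulated edge by edge.

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int H = 5, W = 5;                               // example grid size
    vector<pair<int,int>> sources = {{0, 0}, {4, 4}};

    vector<vector<long long>> dist(H, vector<long long>(W, LLONG_MAX));
    vector<vector<int>> bestSource(H, vector<int>(W, -1));

    // Priority queue of (cost, x, y, sourceIndex), smallest cost first.
    using State = tuple<long long, int, int, int>;
    priority_queue<State, vector<State>, greater<State>> pq;

    for (int i = 0; i < (int)sources.size(); ++i) {
        auto [sx, sy] = sources[i];
        dist[sx][sy] = 0;
        bestSource[sx][sy] = i;
        pq.emplace(0LL, sx, sy, i);
    }

    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    while (!pq.empty()) {
        auto [d, x, y, s] = pq.top(); pq.pop();
        if (d > dist[x][y]) continue;               // stale entry
        auto [sx, sy] = sources[s];
        for (int k = 0; k < 4; ++k) {
            int nx = x + dx[k], ny = y + dy[k];
            if (nx < 0 || nx >= H || ny < 0 || ny >= W) continue;
            // Cost of the neighbour is recomputed from the carried
            // source, not accumulated over edges -- this is where the
            // scheme stops being a shortest-path problem with fixed
            // additive edge weights.
            long long nd = 1LL * (nx - sx) * (nx - sx)
                         + 1LL * (ny - sy) * (ny - sy);
            if (nd < dist[nx][ny]) {
                dist[nx][ny] = nd;
                bestSource[nx][ny] = s;
                pq.emplace(nd, nx, ny, s);
            }
        }
    }
    // dist[x][y] now holds what this scheme computes; my question is
    // why this can differ from the true minimum over all sources.
}
```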
I understand that the issue is related to the cost function not being additive along grid paths (it is non-linear in the number of steps), but I don't have a strong intuition for why this approach is incorrect.
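To spell out the non-additivity the way I currently picture it: along a straight row away from a source at (0, 0), the cost at column k is k^2, so the step from column k-1 to column k effectively "costs" k^2 - (k-1)^2 = 2k - 1. The weight of the same unit edge thus depends on which source you measure from and how far along you already are, so there is no fixed per-edge weight for Dijkstra to relax over. Still, I don't see clearly how this breaks the "best_source per cell" propagation above.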
What are some good ways to think about or model this behavior?
Also, if there are known problems (e.g., on Codeforces or AtCoder) that involve similar ideas, I would appreciate references.