Given an image such as the one below, I want to blur it such that none of the original black pixels change in value. The blur should fade off to white with a nice S curve.
If I apply a 15px Gaussian blur to this image I get this:
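(For reference, the blur itself is nothing special. With Pillow it's something like the snippet below, where "box.png" is a stand-in for my image; as far as I can tell, Pillow treats `radius` as the standard deviation.)

```python
from PIL import Image, ImageFilter

# "box.png": black square on a white background (stand-in filename)
img = Image.open("box.png").convert("L")

# Pillow's radius is the Gaussian's standard deviation, so this is a 15px blur
blurred = img.filter(ImageFilter.GaussianBlur(radius=15))
blurred.save("blurred.png")
```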
It fades out nicely, but if I put a red square where the original black square was, you can see that the blur has actually eaten into the square, exposing parts of the red:
Now I recently found out that a 15px Gaussian blur means one standard deviation is 15px; the actual radius of affected pixels is closer to ¾√(2π)·σ. Technically it's infinite, but most programs cut it off around there, from what I hear. For σ = 15 that gives ¾√(2π)·15 ≈ 28.2px. So for a 15px blur, if I expand my box by 29px and then apply the 15px Gaussian blur, the extent of the blur ought to just barely reach back to my original box, leaving it undisturbed, right?
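Here's how I tried that expand-then-blur experiment, in case I've botched something. It's a SciPy-based sketch; the 29px pad comes from the formula above, and "box.png" is again a stand-in filename:

```python
import numpy as np
from PIL import Image
from scipy import ndimage

SIGMA = 15
PAD = int(np.ceil(0.75 * np.sqrt(2 * np.pi) * SIGMA))  # ceil(28.2) = 29px

img = np.asarray(Image.open("box.png").convert("L"), dtype=float) / 255.0
black = img < 0.5  # True where the original shape is

# Grow the black region by PAD pixels (a Euclidean disk, not a square kernel):
# each background pixel is recolored black if it lies within PAD of the shape.
dist = ndimage.distance_transform_edt(~black)
expanded = np.where(dist <= PAD, 0.0, 1.0)

# Blur the expanded shape; its reach should just graze the original edge.
result = ndimage.gaussian_filter(expanded, sigma=SIGMA)
Image.fromarray((result * 255).astype(np.uint8)).save("expanded_blur.png")
```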
The result looks pretty good, but if I put my red box back again we can see that there is still a difference:
The red doesn’t show through, but I can still see a hard outline.
Is there a blur algorithm better suited to this than Gaussian? What I'm imagining is something like: find the normals of the shape's edges, then draw an S curve along each normal. That would probably be easier if I had a vector image, but I have bitmaps, so I don't know what the closest equivalent would be (I've sketched one guess below).
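To make the question concrete, the closest raster equivalent I can come up with uses a Euclidean distance transform in place of "distance along the normal", then maps that distance through a smoothstep to get the S curve. The 29px falloff and the filenames are placeholders, and I don't know if this is the right approach, which is why I'm asking:

```python
import numpy as np
from PIL import Image
from scipy import ndimage

FALLOFF = 29  # how far the fade should extend, in pixels (placeholder choice)

img = np.asarray(Image.open("box.png").convert("L"), dtype=float) / 255.0
black = img < 0.5

# Distance from each background pixel to the nearest black pixel:
# the raster stand-in for "distance along the edge normal".
dist = ndimage.distance_transform_edt(~black)

# Smoothstep S curve: 0 at the shape's edge, 1 at FALLOFF pixels away.
t = np.clip(dist / FALLOFF, 0.0, 1.0)
result = t * t * (3.0 - 2.0 * t)

# Original black pixels have dist == 0, so they map to exactly 0: untouched.
Image.fromarray((result * 255).astype(np.uint8)).save("faded.png")
```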