I am trying to find a way to constrain the output of a neural network. The output is supposed to be the signed distance function to multiple non-overlapping objects. In particular, I would like the (multi-channel) output to have at most one negative channel at each point. I wonder if there is any activation function or regularizing loss function that helps enforce this condition.
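For concreteness, here is a minimal (non-differentiable) sketch in PyTorch of the property I want the output to satisfy; the (batch, channels, ...) shape convention is just an assumption:

```python
import torch

def satisfies_constraint(out: torch.Tensor) -> torch.Tensor:
    """Check the at-most-one-negative-channel condition per location.

    `out` is assumed to have shape (batch, channels, ...). This only
    illustrates the property; it is not something to backpropagate through.
    """
    num_negative = (out < 0).sum(dim=1)  # count negative channels per location
    return num_negative <= 1
```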
I tried to find an activation function that imposes such an at-most-one-negative-channel constraint. However, existing activation functions seem to be capable only of constraining the sum of all output channels. I thought of activation functions that shift every channel uniformly, but this would produce an output with always exactly one negative channel instead of at most one.
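As an example of what I mean by a uniform shift (a sketch, assuming a (batch, channels) layout): subtracting the second-smallest channel value from every channel leaves only the minimum channel negative, but whenever the minimum is strictly smallest it *is* negative, so the constraint degenerates to "exactly one negative channel":

```python
import torch

def shift_activation(x: torch.Tensor) -> torch.Tensor:
    """Shift every channel by the second-smallest value along dim 1.

    After the shift, only the minimum channel can be negative, but it
    almost always is negative, which is the failure mode described above.
    """
    second_smallest = x.topk(2, dim=1, largest=False).values[:, 1:2]
    return x - second_smallest
```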
I also thought about regularization, such as summing the following function f over all channels, where f(x) = -x for x < 0 and f(x) = 0 for x >= 0. However, this regularization penalizes all channels equally for being negative, including the one channel that is legitimately allowed to be negative.
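For reference, the regularizer I described is just relu(-x) summed over channels; a sketch of what I tried (the reduction to a batch mean is my own choice):

```python
import torch
import torch.nn.functional as F

def negativity_penalty(out: torch.Tensor) -> torch.Tensor:
    """Sum of f(x) = max(-x, 0) over all channels, averaged over the batch.

    This penalizes every negative channel equally, so it also pushes the
    one channel that should be negative inside an object toward zero.
    """
    return F.relu(-out).sum(dim=1).mean()
```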