Maybe I'm just old-fashioned (primitive?), but my understanding of the term "same sign", at least as it relates to programming, has always been: two values have "the same sign" when their sign bits are equal. Under that interpretation, zero is always included among the "non-negative" values: it has the same sign as 1, and not the same sign as -1.