In the case of artificial neural networks, your question can be (partially) answered by looking at the definition of the operation that an artificial neuron performs. An artificial neuron is usually defined as a linear combination of its inputs, followed by the application of a non-linear activation function (e.g. the hyperbolic tangent or ReLU). More formally, a neuron $i$ in layer $l$ performs the following operation
\begin{align}
o_i^l = \sigma \left(\sum_{j=1}^N w_j o_j^{l-1} \right) \tag{1}\label{1},
\end{align}
where $o_j^{l-1}$ is the output from neuron $j$ in layer $l-1$ (the previous layer), $w_j$ the corresponding weight, $\sigma$ an activation function and $N$ the number of neurons from layer $l-1$ connected to neuron $i$ in layer $l$.
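Here is a minimal NumPy sketch of equation \ref{1}, just to make the operation concrete (the function name `neuron_output` and the specific numbers are only illustrative):

```python
import numpy as np

def neuron_output(prev_outputs, weights, activation):
    """o_i^l = sigma(sum_j w_j * o_j^{l-1}), i.e. equation (1)."""
    pre_activation = np.dot(weights, prev_outputs)  # linear combination of the inputs
    return activation(pre_activation)               # non-linear activation function

# Illustrative values: outputs of 3 neurons in layer l-1 and the corresponding weights
prev_outputs = np.array([0.5, -1.2, 2.0])
weights = np.array([0.8, 0.3, -0.1])

print(neuron_output(prev_outputs, weights, np.tanh))  # output of neuron i in layer l
```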
Let's assume that $\sigma$ is the ReLU, which is defined as follows
$$
\sigma(x)=\max(0, x)
$$
which means that all negative inputs are mapped to $0$, while non-negative inputs are left unchanged.
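As a quick numeric check (the input values are arbitrary):

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

# Negative inputs are clipped to 0, non-negative inputs pass through unchanged
print(relu(np.array([-3.0, -0.5, 0.0, 0.7, 2.0])))
```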
In equation \ref{1}, if $w_j$ and $o_j^{l-1}$ have the same sign, the product $w_j o_j^{l-1}$ is positive; if they have opposite signs, it is negative (and it is zero whenever either factor is zero). Therefore, the sign of the output of neuron $j$ in layer $l-1$ alone does not determine its effect on $o_i^l$: the sign of $w_j$ is also required.
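A tiny numeric illustration of this point (the numbers are arbitrary):

```python
o_prev = 0.9   # a positive output from neuron j in layer l-1

# The same output contributes with opposite signs depending on the sign of the weight
print(0.5 * o_prev)    #  0.45: positive weight, positive contribution to the sum
print(-0.5 * o_prev)   # -0.45: negative weight, negative contribution to the sum
```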
Let's suppose that the product $w_j o_j^{l-1}$ is negative: it then contributes negatively to the sum in equation \ref{1}. However, even if the whole sum $\sum_{j=1}^N w_j o_j^{l-1}$ is negative, when $\sigma$ is the ReLU, $o_i^l$ is zero no matter how large the magnitude of that negative sum is. If the activation function is the hyperbolic tangent instead, the magnitude of a negative $\sum_{j=1}^N w_j o_j^{l-1}$ does affect the magnitude of $o_i^l$: the more negative the sum, the closer $o_i^l$ is to $-1$.
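The following small NumPy snippet illustrates the difference (the specific pre-activation values are arbitrary):

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

# Negative pre-activation sums of increasing magnitude
z = np.array([-0.5, -2.0, -10.0])

print(relu(z))     # all zeros: with ReLU, the magnitude of a negative sum is irrelevant
print(np.tanh(z))  # roughly -0.46, -0.96, -1.0: more negative sums push tanh towards -1
```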
To conclude, in general, the effect of the sign of an artificial neuron's output on neighbouring neurons depends on the activation function and on the learned weights. The weights, in turn, depend on the error the neural network makes (assuming it is trained with gradient descent combined with back-propagation), which depends on the training dataset, the loss function, the architecture of the neural network, and so on.
Biological neurons and synapses are more complex than artificial ones. Nevertheless, biological synapses are usually classified as either excitatory or inhibitory, i.e. they either increase or decrease the likelihood that the connected (post-synaptic) neuron fires.