It's possible that you've used up all the available inodes, meaning that whilst you do still have free space on the disk, you can't actually create any new files. To check, try running this:
df -i
Here 'df' reports filesystem usage and '-i' makes it show inode usage instead of block usage. The output will look something like this:
chris@loki:~$ df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
udev 500788 487 500301 1% /dev
tmpfs 505825 638 505187 1% /run
/dev/sda1 1835008 374975 1460033 21% /
tmpfs 505825 27 505798 1% /dev/shm
tmpfs 505825 5 505820 1% /run/lock
tmpfs 505825 16 505809 1% /sys/fs/cgroup
tmpfs 505825 24 505801 1% /run/user/1000
In the example above the root filesystem is only at 21%, so it's fine. However, if 'IUse%' is at 99-100% on your drive, then that'd be the issue.
There are a number of reasons why all the inodes might be used up. In my experience managing servers, it's typically caused by log files not being cleared out regularly enough (we're talking years), or by verbose error logs being written into new text files over and over again. It could be anything in your case, but it's most likely thousands of small files piling up in a handful of directories for no good reason.
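If old rotated logs do turn out to be the culprit, something along these lines can help clear them out. This is just a sketch: the path, the '*.gz' pattern, and the one-year age threshold are assumptions you'd adjust for your own setup, and it only prints what it finds (swap -print for -delete once you're happy with the list):
find /var/log -name '*.gz' -mtime +365 -print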
The best way I've found to track down the source of such a problem is to run this from the root of the filesystem:
find / -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -rn | head
I'm sure there are better and more efficient ways though! The above isn't quite perfect.
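One alternative, assuming you have a reasonably recent GNU coreutils (8.22 or newer; check with du --version), is to let 'du' count inodes directly:
du --inodes -x / 2>/dev/null | sort -rn | head
That lists the directories (including their subdirectories) using the most inodes on the root filesystem; whichever one dominates the list is almost certainly where your files are piling up.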