This is not necessarily cause for concern, and it doesn't mean you need to reduce or otherwise tweak swappiness.
The answer by vandium is pretty comprehensive but I'd also like to mention a couple of things.
It can help to think of lower swappiness values as riskier and higher values as more conservative. With a low swappiness the kernel waits until the memory situation is dire, with little left for cache, before it swaps; with a high swappiness it starts swapping as soon as available memory and cache size come under even slight pressure. Many people treat reducing swappiness as a blanket way to reduce swapping, but very often you will still swap, just later, when the system is more desperate for memory. The real blanket solution to reduce swapping is adding more RAM.
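If you want to see where your system currently sits, here's a minimal Python sketch that reads the live swappiness value and memory headroom straight from /proc (Linux-only, assuming the standard /proc layout; normally you'd just `cat /proc/sys/vm/swappiness` and run `free`, this is purely illustrative):

```python
#!/usr/bin/env python3
"""Print the current swappiness and memory/swap headroom (Linux, /proc)."""

def read_meminfo():
    """Parse /proc/meminfo into a dict of values in kB."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])  # /proc/meminfo reports kB
    return info

def main():
    with open("/proc/sys/vm/swappiness") as f:
        swappiness = int(f.read())

    mem = read_meminfo()
    swap_used = mem["SwapTotal"] - mem["SwapFree"]
    print(f"vm.swappiness : {swappiness}")
    print(f"MemAvailable  : {mem['MemAvailable'] // 1024} MiB")
    print(f"Swap used     : {swap_used // 1024} MiB of {mem['SwapTotal'] // 1024} MiB")

if __name__ == "__main__":
    main()
```

Watching those numbers over time makes the point concrete: a lower swappiness doesn't stop the swap-used figure from growing, it just means the growth starts when MemAvailable is already much lower.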
Secondly, data that has been swapped out stays in swap until it's requested again, rather than being swapped back in as soon as memory demand eases. Some people see this and think it undesirable; they feel swap should be emptied as soon as possible. The justification, though, is that reading a page back from swap costs about the same whether it's done now or later, so the system avoids doing it speculatively. If you do end up needing something that's in swap, that swap-in would have happened at some stage anyway; by delaying it until the data is actually requested, the system keeps available memory and cache space nice and high (good for general performance, and for responsiveness if memory demand quickly climbs back to its previous high, for example if the high-memory event runs on a schedule) and avoids unnecessary I/O.
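You can observe this behaviour on your own machine. A related detail is SwapCached in /proc/meminfo: pages counted there have been read back into RAM but still keep their copy in swap, so evicting them again later costs no extra write I/O. A minimal Python sketch (Linux-only, assuming the standard /proc/meminfo fields) to report both figures:

```python
#!/usr/bin/env python3
"""Report how much data is still in swap, and how much of it is also in RAM."""

def meminfo_kb(field):
    """Return a single /proc/meminfo field, in kB."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(field + ":"):
                return int(line.split()[1])
    raise KeyError(field)

swap_used_kb = meminfo_kb("SwapTotal") - meminfo_kb("SwapFree")
swap_cached_kb = meminfo_kb("SwapCached")

# SwapCached pages live in RAM *and* still hold a slot in swap, so dropping
# them again under pressure needs no further write to the swap device.
print(f"Data in swap   : {swap_used_kb // 1024} MiB")
print(f"...also in RAM : {swap_cached_kb // 1024} MiB (SwapCached)")
```

Run it well after a memory spike has passed and you'll typically still see a non-zero swap-used figure; that's the delayed swap-in behaviour described above, not a problem in itself.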