In a two-player zero-sum game (if I win, you lose and vice versa), you can use a simple and efficient approach to learning from self-play.
How should an opponent be implemented to get good and fast improvements?
You don't need to think in terms of agent vs. opponent; instead, encode both players' goals into a single Q function. Score +1 if player A wins, -1 if player B wins, and 0 for a draw. Player A's goal is then to maximise the score and player B's goal is to minimise it.
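As a minimal sketch of that shared scoring (the `winner` argument and its 'A'/'B'/None convention are assumptions made for illustration, not taken from any particular implementation):

```python
def terminal_value(winner):
    """Shared score used by both players via the same Q function.

    `winner` is assumed to be 'A', 'B' or None (draw or game still running).
    Player A tries to maximise this value, player B tries to minimise it.
    """
    if winner == 'A':
        return 1.0
    if winner == 'B':
        return -1.0
    return 0.0
```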
You can then learn both players' strategies in the same self-play session, with the same Q function, using minimax. In practice this means that where Q-learning normally bootstraps from the maximising action in the next state, the minimax variant picks the maximising or minimising action depending on whose turn it is; otherwise the Q-learning algorithm is unchanged. I have implemented this, though only for tabular Q-learning, not DQN - feel free to learn from, copy and/or re-use any part of that code.
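For illustration, here is a minimal sketch of the tabular update with the minimax bootstrap. This is not the code mentioned above; the state/action representation, the `available_actions` callback and the hyperparameters are all assumptions made for the example.

```python
import random
from collections import defaultdict

Q = defaultdict(float)            # Q[(state, action)] -> value, shared by both players
ALPHA, GAMMA, EPSILON = 0.1, 1.0, 0.1

def bootstrap_value(next_state, player_to_move, available_actions):
    """Minimax bootstrap: player A maximises the shared score, player B minimises it."""
    actions = available_actions(next_state)
    if not actions:               # terminal state, nothing to bootstrap from
        return 0.0
    values = [Q[(next_state, a)] for a in actions]
    return max(values) if player_to_move == 'A' else min(values)

def q_update(state, action, reward, next_state, next_player, available_actions, done):
    """Standard Q-learning update, except the target uses max or min depending on whose turn it is."""
    target = reward
    if not done:
        target += GAMMA * bootstrap_value(next_state, next_player, available_actions)
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])

def choose_action(state, player, available_actions):
    """Epsilon-greedy action choice: A picks the highest-valued action, B the lowest."""
    actions = available_actions(state)
    if random.random() < EPSILON:
        return random.choice(actions)
    key = lambda a: Q[(state, a)]
    return max(actions, key=key) if player == 'A' else min(actions, key=key)
```

Everything other than the min/max switch is the usual tabular Q-learning loop, which is the point: one Q table serves both players.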
Is it reasonable to play against a random player, a perfect player or should the opponent be a DQN player as well?
The Q learner will learn to optimise against whichever player you make it play against. Against a random player, it will not necessarily learn to play well, just well enough to defeat the randomness. It may even make deliberate mistakes - such as not blocking a winning line - knowing it has a better chance of winning because the opponent plays randomly.
Training against a perfect player is possible with tic-tac-toe (because you can construct such a player), although there might be gaps in the trained Q values - game states never seen during training - which mean that an imperfect opponent could actually defeat the trained agent! In practice this does not scale to more complex, unsolved games, because no perfect players exist for them.
Another DQN player should work fine. You would end up with two agents, each specialising in playing one player's turns. This is less efficient than a single minimax-based learner, but there are no expected problems. It may be the preferred choice for some games, especially if they are not zero-sum.
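As a rough sketch of how the two-agent arrangement might be organised (the `env`, `agent_A` and `agent_B` objects and their `act`/`observe`/`learn` methods are hypothetical, not any specific library's API):

```python
def self_play_episode(env, agent_A, agent_B):
    """One episode where two independent learners each play a fixed side.

    Assumes a hypothetical env whose step() returns (next_state, reward, done)
    with the reward expressed from player A's perspective, and agents that
    store their own transitions and learn from them.
    """
    state = env.reset()
    agents = {'A': agent_A, 'B': agent_B}
    player = 'A'
    done = False
    while not done:
        agent = agents[player]
        action = agent.act(state, player)
        next_state, reward_for_A, done = env.step(action, player)
        # Zero-sum flip: each agent sees the reward from its own perspective.
        reward = reward_for_A if player == 'A' else -reward_for_A
        agent.observe(state, action, reward, next_state, done)
        agent.learn()
        state = next_state
        player = 'B' if player == 'A' else 'A'
```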