Based on our conversation in the comment section, what Mucida wants is a reformulation of the input, e.g. if the input is:
"A company director has a pecuniary duty"
the output should be:
"A CEO has monetary responsibilities".
By default, GPT2 returns what could be the next sentence in a longer paragraph, e.g.:
"It's not to make money, but to serve customers".
When you want a large language model like GPT2 to give you a certain type of answer, what usually works well is to include a few examples of what you want in the input.
For example, you'd give as input a few pairs of reformulations:
"What's a reformulation of "A company director has a pecuniary duty"? It's "A company director has a pecuniary duty". What's a reformulation of "Stackoverflow is a great place to ask questions"? It's "Stackoverflow is where you get the best answers to your questions". What's a reformulation of "Elon musk is the richest person on earth?" It's"
Then, the output of GPT2 should complete what comes after "It's" in the same style.
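Here's a minimal sketch of that few-shot prompt, assembled in Python. The `build_prompt` helper and the example pairs are just illustrations of the idea above; the generation step assumes the Hugging Face `transformers` library and is left commented out because it downloads the model.

```python
# Build a few-shot prompt from (original, reformulation) pairs,
# leaving the last answer open for GPT2 to complete.

def build_prompt(examples, query):
    """Concatenate example Q/A pairs, then pose the final question without an answer."""
    parts = []
    for original, reformulation in examples:
        parts.append(f'What\'s a reformulation of "{original}"? It\'s "{reformulation}".')
    parts.append(f'What\'s a reformulation of "{query}"? It\'s')
    return " ".join(parts)

examples = [
    ("A company director has a pecuniary duty",
     "A CEO has monetary responsibilities"),
    ("Stackoverflow is a great place to ask questions",
     "Stackoverflow is where you get the best answers to your questions"),
]
prompt = build_prompt(examples, "Elon Musk is the richest person on earth")

# Generation step (requires downloading the "gpt2" checkpoint, so not run here):
# from transformers import pipeline
# generator = pipeline("text-generation", model="gpt2")
# print(generator(prompt, max_new_tokens=20)[0]["generated_text"])
```

The model tends to continue the pattern, so the text it produces right after the final `It's` is your candidate reformulation.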
You should check what the context length of the GPT2 model you're using is. If you feed it an input longer than the context length, the model won't take your input into account in its entirety.
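As a concrete check, the base "gpt2" checkpoint has a context length of 1024 tokens (exposed as `n_positions` in its config). A small sketch of the guard, with the token-counting step via `transformers` shown as a comment since it requires downloading the tokenizer:

```python
# Guard against prompts longer than the model's context window.
# 1024 is the context length of the base "gpt2" checkpoint
# (model.config.n_positions in transformers).

GPT2_CONTEXT_LENGTH = 1024

def fits_in_context(num_tokens, max_length=GPT2_CONTEXT_LENGTH):
    """True if a prompt of `num_tokens` tokens fits in the context window."""
    return num_tokens <= max_length

# Counting tokens with transformers (downloads the tokenizer on first use):
# from transformers import GPT2TokenizerFast
# tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
# n = len(tokenizer(prompt)["input_ids"])
# assert fits_in_context(n), "prompt too long for GPT2"
```

Note that the count must be in tokens, not characters: GPT2's tokenizer splits text into subword units, so a character count will underestimate nothing but can't be compared against `n_positions` directly.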