Survival, Imagining, Moral Reasoning
When you said "the stupidest human", what came to mind was a newborn, which already has some basic "survival instincts". It will avoid pain, consume food, and quickly learn to distinguish "safe" from "dangerous" conditions and people.
We have computer programs that can learn chess and calculate a strong move in a split second, but playing chess is a bit pointless. Merely being able to play a board game is of little value from a survival, industrial, or economic perspective.
There are programs that can do things that are very helpful in the modern world, but as far as I know, they just don't have survival instincts. A self-learning robot, left in a forest with all the tools it needs to generate power for, build duplicates of, maintain, and defend itself, probably wouldn't be able to learn how to do so in time to ensure its survival. Our current self-learning programs would need to be able to identify when they have succeeded or failed in order to improve their survival odds. A child of two may learn fast enough to survive if the conditions are not too severe and non-toxic food and some form of shelter are nearby.
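To make the "identify when it has succeeded or failed" point concrete, here is a minimal toy sketch in the style of reinforcement learning. Everything in it is hypothetical (the "energy" variable, the `forage`/`rest` actions, the policies): the point is that learning only works because the code hands the agent an explicit reward signal, whereas a robot abandoned in a forest has no such oracle telling it whether an action improved its survival odds.

```python
import random

def survival_reward(energy_before, energy_after):
    """Reward = change in a survival proxy ("energy").
    The hard part in the real world is that nothing hands
    the robot this number after each action."""
    return energy_after - energy_before

def run_episode(policy, steps=50, seed=0):
    """Run one hypothetical 'life' and total up the reward signal."""
    rng = random.Random(seed)
    energy = 10.0
    total_reward = 0.0
    for _ in range(steps):
        before = energy
        if policy(energy) == "forage":
            energy += rng.uniform(-1.0, 2.0)  # risky, but can gain energy
        else:  # "rest"
            energy -= 0.5                     # always costs a little
        total_reward += survival_reward(before, energy)
        if energy <= 0:
            break  # "death": the episode ends early
    return total_reward

# Two hypothetical policies: resting forever guarantees starvation,
# while foraging when energy runs low can keep the agent alive.
rest_only = lambda e: "rest"
forager = lambda e: "forage" if e < 15 else "rest"
```

Comparing `run_episode(rest_only)` with `run_episode(forager)` shows the forager accumulating more reward, but only because the environment here is rigged to report success and failure. Without that feedback loop, today's self-learning programs have nothing to optimize.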
A financially poor, marginally educated person with lower-than-average aptitude working on a farm or in a factory might not be able to play chess well, but they would definitely be able to tell if someone were murdering someone else, and would know to flee and seek the authorities. A robot that can play chess would not.
Furthermore, humans can continue to learn about a problem even when separated from it, simply by thinking it over. The ability to construct arbitrary mental models and run thought experiments is, so far, unique to humans.
That said, I do hope that we will soon have programs that closely replicate the human mind and demonstrate some aspects of what we call consciousness.