Would an AI be able to examine itself objectively and determine whether it is capable of doing the task?
A possible approach might be the one suggested and studied by J. Pitrat (one of the earliest AI researchers in France; his PhD on AI was published in the early 1960s, and he is now a retired scientist). Read his Bootstrapping Artificial Intelligence blog and his book Artificial Beings: The Conscience of a Conscious Machine.
(I'm not able to summarize his ideas in a few words, even though I know J. Pitrat and still meet him once in a while. Roughly speaking, he combines a strong meta-knowledge approach with reflexive programming techniques. He has been working, alone, for more than 30 years on his CAIA system, which is very difficult to understand: even though he publishes his system as free software, CAIA is not user friendly and has a poorly documented command-line interface, so while I am enthusiastic about his work, I am unable to explore his system.)
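To give a very rough flavor of what "reflexive programming" means in general (and this is only a toy sketch of the generic idea of a program inspecting itself, using Python's standard inspect module; it is in no way a model of J. Pitrat's CAIA or of his meta-knowledge approach, and the function names here are hypothetical):

    # Toy illustration of reflection: a program that inspects its own
    # functions and reports on them.  NOT a model of CAIA.
    import inspect
    import sys

    def solve(problem: str) -> str:
        """Pretend problem solver: the only 'capability' this toy program has."""
        return f"attempted: {problem}"

    def self_examine() -> dict:
        """Inspect this module's own functions and report what they claim to do."""
        current_module = sys.modules[__name__]
        report = {}
        for name, obj in inspect.getmembers(current_module, inspect.isfunction):
            report[name] = {
                "doc": inspect.getdoc(obj),
                "lines_of_code": len(inspect.getsource(obj).splitlines()),
            }
        return report

    if __name__ == "__main__":
        # The program "examines itself", but this is pure bookkeeping:
        # it says nothing about whether it is actually capable of a task.
        for fn, info in self_examine().items():
            print(fn, info)

Of course, the gap between such trivial introspection and an objective self-assessment of capability is exactly the hard part of your question.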
But defining what "conscience" or "self-awareness" could precisely mean for an artificial intelligence system is a hard problem in itself. AFAIU, even for human intelligence, we don't exactly know what it really means or how it actually works. IMHO, there is no consensus on a definition of "conscience", "self-awareness", or "self-examination" (even when applied to humans).
But whatever approach is used, giving any kind of constructive answer to your question would require many pages. J. Pitrat's books and blog are a better attempt than anything anyone could answer here. So your question is, IMHO, too broad.