According to Russell and Norvig, a knowledge-based agent will only add a sentence to its knowledge base if it follows logically from what the agent already knows or directly observes. "Follows logically" essentially means that if the premises are true, then the conclusion is guaranteed to be true. So the agent will only add the sentence if it is 100% sure the sentence is true.
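To make what I mean concrete, here is a rough sketch (in Python, with my own made-up names, not anything from the book) of that standard policy for a toy propositional knowledge base, where entailment is checked by brute-force model enumeration:

```python
from itertools import product

# Toy propositional KB: each sentence is a function from a model
# (a dict of symbol -> bool) to bool.

def entails(kb, sentence, symbols):
    """True iff every model that satisfies all KB sentences also satisfies `sentence`."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if all(s(model) for s in kb) and not sentence(model):
            return False
    return True

def tell_strict(kb, sentence, symbols):
    """The strict policy: add the sentence only if it follows logically from the KB."""
    if entails(kb, sentence, symbols):
        kb.append(sentence)

# Example: KB contains P and (P -> Q), so Q is entailed and gets added.
symbols = ["P", "Q"]
kb = [lambda m: m["P"], lambda m: (not m["P"]) or m["Q"]]
tell_strict(kb, lambda m: m["Q"], symbols)
```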
Is this hyperskepticism in logic justified? Couldn't the agent be more efficient if it added a sentence whenever it was, say, 99% sure the sentence was true? It could potentially add many more true sentences and only occasionally add false ones. There would need to be a mechanism for unlearning sentences, but as long as the vast majority of added sentences are true, why couldn't that work?
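Here is a rough sketch of the alternative I have in mind, again with made-up names: accept a sentence once the agent's confidence crosses a threshold, and keep the confidence around so the sentence can be retracted ("unlearned") if later evidence pushes it back down. Where the probabilities come from is left abstract; none of this is from Russell and Norvig.

```python
THRESHOLD = 0.99

def tell_probabilistic(kb, sentence, confidence):
    """The relaxed policy: add the sentence if the agent is at least 99% sure it is true."""
    if confidence >= THRESHOLD:
        kb[sentence] = confidence  # kb is a dict: sentence -> current confidence

def revise(kb, sentence, new_confidence):
    """Unlearn a previously accepted sentence whose confidence has dropped below the threshold."""
    if sentence in kb and new_confidence < THRESHOLD:
        del kb[sentence]
    elif new_confidence >= THRESHOLD:
        kb[sentence] = new_confidence
```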
I asked essentially this question here, and someone suggested that I post it here.