I have a question about what it means for a knowledge base to be consistent and complete. I've been looking into non-monotonic logic and various formalisms for it in the book "Knowledge Representation and Reasoning" by Hector Levesque and Ronald J. Brachman, but something is confusing me.
They say:
> We say a KB exhibits consistent knowledge iff there is no sentence $P$ such that both $P$ and $\neg P$ are known. This is the same as requiring the KB to be satisfiable. We also say that a KB exhibits complete knowledge iff for every $P$ (within its vocabulary) $P$ or $\neg P$ is known.
They then seem to suggest that by "known" they mean "entailed". They say:
> In general, of course, knowledge can be incomplete. For example, suppose KB consists of a single sentence, $(P \lor Q)$. Then KB does not entail either $P$ or $\neg P$, and so exhibits incomplete knowledge.
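To check this example concretely, here is a small brute-force sketch I put together (my own code, not from the book; the function name `entails` and the two-variable setup are just for illustration) that tests propositional entailment by enumerating all truth assignments:

```python
# My own brute-force sketch (not from the book): test propositional
# entailment for the two-variable example KB = {(P or Q)} by
# enumerating all truth assignments.
from itertools import product

def entails(kb, query):
    """KB entails query iff query is true in every model of KB."""
    for p, q in product([True, False], repeat=2):
        model = {"P": p, "Q": q}
        if kb(model) and not query(model):
            return False  # found a model of KB where query fails
    return True

kb = lambda m: m["P"] or m["Q"]  # the single sentence (P or Q)

print(entails(kb, lambda m: m["P"]))      # False: KB does not entail P
print(entails(kb, lambda m: not m["P"]))  # False: KB does not entail ~P
```

Neither $P$ nor $\neg P$ is entailed, so by the definition above the KB's knowledge is incomplete; it is still consistent, since the assignment making $P$ true and $Q$ false satisfies it.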
But when dealing with sets of sentences, I usually see these terms defined with respect to derivability rather than entailment.
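If I'm not mistaken, for classical first-order logic soundness and the Gödel completeness theorem make the two notions coincide:

$$\mathrm{KB} \vdash P \quad \Longleftrightarrow \quad \mathrm{KB} \models P,$$

so perhaps the choice of wording doesn't matter extensionally, but I'd still like to know which notion the authors intend.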
So my question is, what exactly do these authors mean by "known" in the above quotes?
Edit: this post helped clarify things.