
Suppose there are some objects with features, and the goal is parametric density estimation. The form of the density model is fixed, and its parameters are obtained by maximizing the log-likelihood:

$LL = \sum_{i \in I_1} \log \left( \sum_{j \in K_i} \theta_j \right) + \sum_{i \in I_2} \log \left( 1 - \sum_{j \in L_i} \theta_j \right)$

Assume that the parameters $\theta_j$ are probabilities, i.e. $0 < \theta_j < 1$, and that $\sum_{j\in L_i} \theta_j < 1$ for every $i \in I_2$. From a practical perspective, it seems natural to make the parameters $\theta_j$ themselves functions of the features, i.e. $\theta_j = F(x_j^1, \ldots, x_j^m)$.
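
For concreteness, here is a minimal sketch of evaluating this objective for a fixed $\theta$; the index sets `K` and `L` and the numbers below are made-up placeholders for the $K_i$ and $L_i$:

```python
import numpy as np

def log_likelihood(theta, K, L):
    """LL = sum_{i in I_1} log(sum_{j in K_i} theta_j)
            + sum_{i in I_2} log(1 - sum_{j in L_i} theta_j)."""
    ll = 0.0
    for idx in K:          # terms over I_1
        ll += np.log(theta[idx].sum())
    for idx in L:          # terms over I_2
        ll += np.log(1.0 - theta[idx].sum())
    return ll

# made-up example: 4 parameters and two terms of each kind
theta = np.array([0.10, 0.20, 0.30, 0.15])
K = [np.array([0, 1]), np.array([2])]     # index sets K_i
L = [np.array([1, 2]), np.array([0, 3])]  # index sets L_i
print(log_likelihood(theta, K, L))
```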

Is there any known standard method or heuristic to optimize such an objective with a decision tree, i.e. assuming that our function $F$ is a decision tree?
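
To make the question more concrete, one heuristic I can imagine (not a method I know from the literature) is a gradient-boosting-style loop: link $\theta_j$ to the features through a sigmoid, and at each step fit a shallow regression tree to the per-object gradient of $LL$. A rough sketch with made-up toy data:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Made-up toy data: n objects with m features, plus random index sets
# standing in for the K_i and L_i above.
n, m = 50, 3
X = rng.normal(size=(n, m))
K = [rng.choice(n, size=3, replace=False) for _ in range(30)]
L = [rng.choice(n, size=2, replace=False) for _ in range(30)]

# Cap theta_j at 1 / max_i |L_i| so that sum_{j in L_i} theta_j < 1 holds
# throughout the optimization.
cap = 1.0 / max(len(idx) for idx in L)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_wrt_theta(theta):
    """Per-parameter gradient dLL/dtheta_j of the objective above."""
    g = np.zeros(len(theta))
    for idx in K:
        g[idx] += 1.0 / theta[idx].sum()
    for idx in L:
        g[idx] -= 1.0 / (1.0 - theta[idx].sum())
    return g

# Functional gradient ascent: theta_j = cap * sigmoid(f(x_j)), where f is
# grown as a sum of shallow regression trees fitted to the current gradient.
f = np.zeros(n)
learning_rate, n_rounds = 0.1, 100
trees = []
for _ in range(n_rounds):
    s = sigmoid(f)
    theta = cap * s
    g = grad_wrt_theta(theta) * cap * s * (1.0 - s)   # chain rule through the link
    tree = DecisionTreeRegressor(max_depth=2).fit(X, g)
    trees.append(tree)
    f += learning_rate * tree.predict(X)

theta = cap * sigmoid(f)
ll = (sum(np.log(theta[idx].sum()) for idx in K)
      + sum(np.log(1.0 - theta[idx].sum()) for idx in L))
print("final log-likelihood:", ll)
```

The cap on $\theta_j$ is only there to keep the second logarithm defined; I do not know whether this is the right way to handle the constraint, or whether a single tree (rather than an ensemble) admits a more direct fitting procedure.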

Any related results are welcome.

