
I have a mix of two deep models, as follows:

if model A says YES --pass to B--> if model B says YES --> result = YES
if model A says NO  --> result = NO

So basically, model B validates model A's YES predictions. My models are actually the same, but trained on two different feature sets of the same inputs.
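The cascade described above can be sketched as a simple decision function. `cascade_predict`, `model_a`, and `model_b` are hypothetical names for illustration, standing in for any two trained binary classifiers:

```python
def cascade_predict(model_a, model_b, features_a, features_b):
    """Two-stage cascade: model B only validates model A's YES."""
    if model_a(features_a) == "YES":       # stage 1: model A screens
        if model_b(features_b) == "YES":   # stage 2: model B confirms
            return "YES"
        return "NO"                        # B vetoes A's YES
    return "NO"                            # A's NO is final; B never runs

# Toy stand-in models "trained" on different feature sets of the same input
a = lambda x: "YES" if x > 0.5 else "NO"
b = lambda x: "YES" if x > 0.7 else "NO"

print(cascade_predict(a, b, 0.9, 0.8))  # YES: both models agree
print(cascade_predict(a, b, 0.9, 0.6))  # NO: B vetoes A
print(cascade_predict(a, b, 0.3, 0.8))  # NO: A says no, B is skipped
```

Note that a NO can come from either stage; only a double YES yields YES.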

What is this mix called in machine-learning terminology? I just call it a master/slave architecture, or a primary/secondary model.

Tina J

1 Answer


Not in terms of models, but there is a related term: 'hierarchical learning'. If your model's task is to classify disease, it first detects the presence of a disease (disease / no disease); only if a disease is detected does it proceed to classify it further (class A/B/C/...). Otherwise it does not proceed. This technique of hierarchical learning is very common in supervised learning tasks.

Now, according to your question, you have two models, and I assume they perform different tasks and each produces a binary outcome (yes/no). Here you could call it 'multitask learning', where the output of task 1 is passed to task 2 for processing: if task 1 detects the presence of a disease, then task 2 classifies it into various classes, or segments it, or localizes it, etc.
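That detect-then-classify setup can be sketched as follows; `detect` and `classify` are illustrative stand-ins, not models from the answer:

```python
def two_stage(detector, classifier, x):
    """Stage 1 decides disease / no disease; stage 2 runs only on a positive."""
    if not detector(x):      # task 1: presence detection
        return "no disease"
    return classifier(x)     # task 2: fine-grained classification

# Toy stand-ins: flag if any symptom score is high, then pick the strongest one
detect = lambda scores: max(scores) > 0.5
classify = lambda scores: "class " + "ABC"[scores.index(max(scores))]

print(two_stage(detect, classify, [0.9, 0.1, 0.2]))  # class A
print(two_stage(detect, classify, [0.1, 0.2, 0.3]))  # no disease
```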

  • 1
    Thanks. My models are actually the same, but trained on two different feature sets of inputs. So how is this one called? – Tina J May 01 '20 at 18:37
  • I am not aware of a specific terminology for this. But if you use the same models, just trained on different sets of inputs, and you use both models to infer your final output, then that's just plain model ensembling. An example is a face recognition setup where you train the **first model** on **local features** such as eyes, nose, mouth, etc., and the **second model** on **global features** such as overall face texture, size, hair, etc. In the end you would fuse their outputs and average them for the prediction. Model averaging is common in ensembling tasks. – Aniket Velhankar May 03 '20 at 17:07
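The model-averaging idea from the last comment can be sketched like this; the function name and the 0.5 decision threshold are illustrative assumptions, not from the thread:

```python
def average_ensemble(p_local, p_global, threshold=0.5):
    """Fuse two models' match probabilities by simple averaging."""
    fused = (p_local + p_global) / 2   # model averaging
    return "match" if fused >= threshold else "no match"

# e.g. local-feature model is confident, global-feature model less so
print(average_ensemble(0.9, 0.4))  # average 0.65 -> "match"
print(average_ensemble(0.2, 0.3))  # average 0.25 -> "no match"
```

Unlike the cascade in the question, averaging treats both models symmetrically: neither one can veto the other outright, and a strong score from one can compensate for a weak score from the other.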