This is an empirical question: roughly how many tasks do you need data for to train a useful meta-learning model (e.g. with MAML)? I'm looking for ranges based on personal experience, or, if anyone has researched the topic, references for those estimates would be helpful as well.
For context, I'm trying to work with about 5-7 tasks. I saw meta-learning implemented with roughly this many tasks in the Multi-MAML paper, but I've since seen example code in the learn2learn library that uses thousands of tasks...
P.S. I'm not sure whether different parameterizations of a single task definition still count as 'one task' (e.g. y = a*cos(x), where 'a' varies). Could that account for the discrepancy?
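For what it's worth, in the original MAML sinusoid benchmark each "task" is one draw of amplitude and phase (a ∈ [0.1, 5.0], b ∈ [0, π] for y = a·sin(x + b)), so "thousands of tasks" can come from a single parametrized family. A minimal sketch of that convention (the sampling ranges follow the MAML paper; the function names are mine):

```python
import numpy as np

def sample_sine_task(rng):
    """One 'task' = one (amplitude, phase) pair, following the
    MAML sinusoid benchmark convention. Drawing fresh pairs
    yields effectively unlimited tasks from one family."""
    amplitude = rng.uniform(0.1, 5.0)
    phase = rng.uniform(0.0, np.pi)

    def task(x):
        return amplitude * np.sin(x + phase)

    return task

rng = np.random.default_rng(0)
# Under this counting, a single parametrized definition gives
# as many distinct tasks as you care to sample:
tasks = [sample_sine_task(rng) for _ in range(1000)]
```

So under that counting, y = a·cos(x) with varying 'a' would be many tasks, not one, which may explain the gap between 5-7 hand-built tasks and the thousands in library examples.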