A little background... I’ve been on-and-off learning about data science for around a year, but I started thinking about artificial intelligence a few years ago. I have a cursory understanding of some common concepts but not much depth. When I first learned about deep learning, my automatic response was “that’s not how our minds do it.” Deep learning is obviously an important topic, but I’m trying to think outside the black box.
I think of deep learning as being “outside-in” in that a model has to rely on examples to understand (for lack of a better term) that some dataset is significant. However, our minds seem to know when something is significant in the absence of any prior knowledge of the thing (i.e., “inside-out”).
Here’s a thing:
I googled “IKEA hardware” to find that. The point is that you probably don’t know what this is or have any existing mental relationship between the image and anything else, but you can see that it’s something (or two somethings). I realize there is unsupervised learning, image segmentation, etc., which deal with finding order in unlabeled data, but I think this example illustrates the difference between the way we tend to think about machine learning/AI and how our minds actually work.
More examples:
1)
2)
3)
Let’s say that #1 is a stock chart. If I were viewing the chart and trying to detect a pattern, I might mentally simplify the chart down to #2. That is, the chart can be simplified into a horizontal segment and a rising segment.
For #3, let’s say this represents log(x). Even though it’s not a straight line, someone with no real math background could describe it as an upward slope that flattens out as the line gets higher. That is, the line can still be reduced to a small number of simple ideas.
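This kind of simplification can be prototyped directly. Below is a minimal sketch of the Ramer-Douglas-Peucker algorithm, which reduces a curve to the few endpoints needed to stay within a tolerance of the original — roughly the “flat segment plus rising segment” reduction described above. The toy “flat then rising” series and the epsilon value are my own assumptions, not anything from the examples:

```python
import math

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker: recursively drop points that deviate
    less than epsilon from the straight line joining the endpoints."""
    if len(points) < 3:
        return list(points)
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    norm = math.hypot(dx, dy)
    # find the interior point farthest from the chord
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        x, y = points[i]
        if norm == 0:  # degenerate chord: fall back to point distance
            d = math.hypot(x - x0, y - y0)
        else:
            d = abs(dy * (x - x0) - dx * (y - y0)) / norm
        if d > dmax:
            dmax, idx = d, i
    if dmax <= epsilon:
        # everything is close enough to the chord; keep only endpoints
        return [points[0], points[-1]]
    # otherwise split at the worst point and simplify each half
    left = rdp(points[:idx + 1], epsilon)
    right = rdp(points[idx:], epsilon)
    return left[:-1] + right

# A "flat then rising" series like chart #2:
series = [(x, 0.0) for x in range(10)] + [(x, float(x - 9)) for x in range(10, 20)]
print(rdp(series, 0.1))  # 20 points collapse to 3: start, corner, end
```

On this toy series the 20 points collapse to just the start, the corner where the chart turns upward, and the end — which is exactly the two-segment mental summary.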
I think this simplification is the key to the gap between how our minds work and what currently exists in AI. I’m aware of Fourier transforms, polynomial regression, etc., but I think there’s a more general process of finding order in sensory data. Once we identify something orderly (i.e., something that can’t reasonably be random noise), we label it as a thing and then our mental network establishes relationships between it and other things, higher order concepts, etc.
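One crude way to operationalize “can’t reasonably be random noise” is a permutation test: compute some structure statistic on the series, then see how often shuffled copies of the same values match it. The choice of statistic (lag-1 autocorrelation), the trial count, and the toy series below are all my own assumptions — a sketch of the idea, not a claim about how minds do it:

```python
import random

def order_score(ys, trials=200, seed=0):
    """Compare the lag-1 autocorrelation of a series against the same
    statistic on shuffled copies. If the real series beats almost all
    shuffles, its structure is unlikely to be noise."""
    def lag1(seq):
        m = sum(seq) / len(seq)
        num = sum((a - m) * (b - m) for a, b in zip(seq, seq[1:]))
        den = sum((a - m) ** 2 for a in seq)
        return num / den if den else 0.0

    rng = random.Random(seed)
    observed = lag1(ys)
    beaten = 0
    for _ in range(trials):
        shuffled = ys[:]
        rng.shuffle(shuffled)
        if abs(lag1(shuffled)) >= abs(observed):
            beaten += 1
    # second value is the fraction of shuffles that matched or beat
    # the real series; near zero => probably "a thing"
    return observed, beaten / trials

smooth = [x * 0.5 for x in range(20)]
print(order_score(smooth))
```

A steadily rising series scores high autocorrelation that shuffles essentially never reproduce, so it gets flagged as orderly; pure noise would blend in with its own shuffles.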
I’ve been trying to think about how to use decision trees to find pockets of order in data (to no avail yet - I haven’t figured out how to apply them to all of the scenarios above), but I’m wondering if there are any other techniques or schools of thought that align with the general theory.
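For what it’s worth, a depth-limited regression tree applied to a single series is essentially a greedy variance-minimizing splitter, and that does carve a signal into roughly constant “pockets.” Here’s a self-contained sketch of that splitting step — the sum-of-squared-errors criterion, the depth limit, and the toy series are my own choices, not a claim that this is the right approach:

```python
def split_into_pockets(ys, max_depth=3, tol=1e-6):
    """Recursively split a series at the index that most reduces
    within-segment variance, mimicking how a depth-limited regression
    tree would partition it into roughly constant pockets."""
    def sse(seg):
        # sum of squared deviations from the segment mean
        if not seg:
            return 0.0
        m = sum(seg) / len(seg)
        return sum((y - m) ** 2 for y in seg)

    def rec(lo, hi, depth):
        seg = ys[lo:hi]
        if depth == 0 or len(seg) < 2 or sse(seg) <= tol:
            return [(lo, hi)]  # segment is already orderly enough
        best_i, best = None, sse(seg)
        for i in range(lo + 1, hi):
            s = sse(ys[lo:i]) + sse(ys[i:hi])
            if s < best - tol:
                best_i, best = i, s
        if best_i is None:  # no split improves things
            return [(lo, hi)]
        return rec(lo, best_i, depth - 1) + rec(best_i, hi, depth - 1)

    return rec(0, len(ys), max_depth)

ys = [0.0] * 8 + [5.0] * 8  # a flat pocket, then a jump to another
print(split_into_pockets(ys))  # finds the boundary at index 8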