8

Essentially, AI is created by human minds, so is the intelligence & creativity of algorithms properly an extension of human intelligence & creativity, rather than something independent?

I assume that intelligence does not necessarily require creativity; however, creativity can result from machine learning. (A simple example is AlphaGo discovering novel strategies.)

nbro
DukeZhou

8 Answers

3

This is an old question, going back at least to 1950. It is one of the original objections to AI that Turing considers and attempts to refute in his seminal 1950 paper, "Computing Machinery and Intelligence".

Turing actually attributes this objection to Lady Lovelace, whose remark he notes was also quoted by another author (Douglas Hartree, in 1949). In Turing's paper, this is objection (6), "Lady Lovelace's Objection", in Section 6. The objection is concisely stated as

The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform.

where "The Analytical Engine" was an early design for an all-mechanical general purpose computer.

Turing offers two replies to this objection. First, he reminds us that computer programs have bugs. That is, they often do things their creators did not intend. This is unsatisfying to many readers, but it does address the objection: programs may act in ways that are unrelated to our intelligence, and in doing so, might display unexpected intelligent behaviors. In this sense, their intelligence would not be an intentional product of human intelligence.

Turing's stronger reply comes from an anticipation that learning would eventually move to the center of AI research (keep in mind again, this is written in 1950, well before any reasonable learning algorithms had been proposed!). Turing uses the example of a robotic child in Section 7 of the paper (Learning Machines) to elaborate on his point. A child is created by its parents, but, endowed with the ability to learn, quickly begins to display behaviors its parents do not anticipate or intend. No one would suggest that a person's intelligence is "really just" the intelligence of their parents, even though their parents created them, and are partially responsible for that intelligence.

Likewise, Turing's proposed robotic child is created by a parent, but, endowed with learning, quickly begins to engage in behaviors the parent does not anticipate or intend. Therefore, machine intelligence need not be reduced to just human intelligence.

I think that if Turing were alive today, he would agree that we are now beginning to move into the era of learning machines he anticipated. Some of our programs now engage in intelligent behaviors that we do not anticipate or understand. For example, self-driving cars now kill or maim people, because they have learned behaviors their creators did not intend or anticipate, perhaps not unlike a reckless teenage driver.

John Doucette
  • Thanks for treating the historical dimension! The bugs, or errors, Turing references may have more weight in the context of random mutation in nature, where some increase fitness... – DukeZhou Nov 21 '19 at 01:41
  • This is definitely the best answer so far, especially because you cite one of the most influential papers in artificial intelligence. However, I need to agree with Ada Lovelace. Machines can only do what we originally program them to do, either directly or indirectly, with different levels of detailed commands. If I say to a machine, "Optimize! I don't care about the solution (numbers), just do what I said, machine!", it will optimize. I command the machine to optimize, with respect to a metric that I (a human) chose. – nbro Nov 21 '19 at 02:31
  • Turing's argument relies on the idea that random or unexpected operations can lead to "intelligent behavior", but can intelligence be measured only by observing behavior? I don't think so. The other argument is also flawed, in my opinion. The fact that you didn't predict something doesn't mean that you didn't indirectly cause it. – nbro Nov 21 '19 at 02:35
  • @nbro yeah, Turing won't have a lot to say to you if you don't buy his earlier argument that intelligence must be determined by observing behavior. I think that you're right, without that piece, Lovelace's objection is really strong. – John Doucette Nov 21 '19 at 12:18
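As a concrete illustration of nbro's "Optimize!" comment above, here is a minimal sketch (the function names and constants are invented for illustration, not taken from the thread): the human chooses the metric, and the machine merely executes the minimization it was ordered to perform.

```python
# Hypothetical sketch: the human picks the objective; the machine only
# carries out the minimization it was instructed to perform.

def human_chosen_metric(x):
    # The human decided that "squared distance from 3" is what matters.
    return (x - 3.0) ** 2

def minimize(metric, x=0.0, lr=0.1, steps=200):
    # Plain gradient descent with a finite-difference gradient estimate.
    eps = 1e-6
    for _ in range(steps):
        grad = (metric(x + eps) - metric(x - eps)) / (2 * eps)
        x -= lr * grad
    return x

print(minimize(human_chosen_metric))  # ~3.0; the machine never chose this goal
```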
1

I think no, it isn't. The reason I would say no is that, in order for it to be an extension of our intelligence & creativity, it must be limited by them. This, I believe, isn't the case, however. We are capable of creating an AI that is smarter than ourselves (say, at Go or Chess, without cheating and checking every possible move), and so it is not bound by our own intelligence.

I would liken it to creating a child. Just because you gave birth to Einstein doesn't mean he's an extension of your intelligence. (This is of course pretty rudimentary, as it's very debatable whether it's reasonable to liken humans to AI.)

Of course, this is a philosophical question, so it's hard to really answer yes or no.

Recessive
  • Your analogy of the child is not a good one, in my opinion. When you develop an algorithm, you know exactly what it is doing. In principle, you can exactly compute what the computer computes. For example, when you're training a neural network with gradient descent and back-propagation, you know exactly what's happening, even though you do not necessarily memorize the numerical or symbolic representation of the neural network (the combination of all the weights) or the gradients, etc. In the case of a child, you can't really be certain of the outcome, a stochastic combination of genes. – nbro Nov 20 '19 at 02:58
  • The confusion arises because apparently the AI can do more things than us, but this is not true. In principle, we can do everything a computer can do. In fact, computers were invented exactly to mimic our methodological calculations. Computers simply do it a lot faster than us. AI is really limited by our limits, in terms of computable functions that we can compute (but not in terms of speed). We could in principle perform as well as AlphaGo, if we just had enough time and memory. – nbro Nov 20 '19 at 03:05
  • @nbro I agree that AI is limited by our limits *given enough time and memory*. The reason for this, I think, is that a human has absolutely no limits with infinite time and memory. However, in reality this is never the case. So for all practical purposes, I would say an AI takes a problem that is intractable for a human (though solvable with infinite time) and makes it possible, making it unbounded by our limits in normal cases. – Recessive Nov 20 '19 at 03:18
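nbro's claim in the comments above, that gradient-descent training is in principle a fully specified computation, can be made concrete with a toy sketch (the model, data, and constants below are invented for illustration): running the same human-written training procedure twice yields bit-for-bit identical weights.

```python
import random

def train(seed):
    # A tiny linear model fit by gradient descent. Every step is a fixed,
    # human-specified instruction, so the whole run is reproducible.
    rng = random.Random(seed)
    w = rng.random()                           # random initial weight
    data = [(x, 2.0 * x) for x in range(10)]   # the "true" function is y = 2x
    for _ in range(200):
        for x, y in data:
            grad = 2 * (w * x - y) * x         # gradient of the squared error
            w -= 0.001 * grad                  # the update rule a human wrote down
    return w

# Identical seeds give identical weights: nothing happens that the
# program (and so, in principle, its author) did not fully specify.
assert train(42) == train(42)
print(train(42))  # ~2.0
```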
1

No, it isn't.

AI is essentially human intelligence combined with computing power, used to achieve tasks that a human alone cannot complete in the time that a programmed machine can.

To give an example: a human can identify a pattern in a data set of, say, 1,000 records. However, if that same logic needs to be applied to a data set of a billion records, a human would take ages to do it. But a machine can do it in seconds, provided the human gives it the right instructions on how to do so.
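As a minimal sketch of this point (the record format and the rule below are invented for illustration): the "logic" is a human-written rule, and the machine simply applies it to arbitrarily many records.

```python
# Hypothetical sketch: a human-spotted rule, applied mechanically at scale.

def is_anomalous(record):
    # The rule a human might notice by eye in a 1,000-record sample.
    return record["value"] > 999_900

records = [{"value": v} for v in range(1_000_000)]  # stand-in for a billion rows
flagged = [r for r in records if is_anomalous(r)]
print(len(flagged))  # 99 records flagged in well under a second
```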

Hope that helps.

Shayan Shafiq
  • It's true that computers are a lot more efficient at certain tasks, but giving a machine precise instructions to perform some task would not fall under the definition of AI and this doesn't say much about whether AI is truly intelligent or simply an extension of human intelligence. – NotThatGuy Nov 20 '19 at 10:20
1

I believe AI is, at least in certain ways, both an extension of human intelligence & creativity and something independent as well. Note that people didn't design airplanes to fly the way birds do. Although planes use the same principles of aerodynamics that birds use to fly, we've adapted how those physical principles are applied to accommodate what we have to work with, i.e., metal, by having things like propellers, jet engines, and fixed wings (initially; later we also had helicopter rotor blades), etc.

In a similar fashion, we have adapted a few things we've learned about how human minds & intelligence work, with artificial neural networks being a prime example. However, even with our fairly limited understanding, we've implemented neural networks differently, e.g., in which activation functions are used. Although we are learning more about how our brains work through neuroscience research, there's still much we don't yet know. Nonetheless, I believe one of the biggest overall differences between our minds & AI is that our general intelligence comes mostly from massive parallel processing, to a much greater extent than even high-end GPUs, or most supercomputers, can deliver, while artificial intelligence generally depends a lot more on the massive speed of calculation available on modern computer chips.
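As a small illustration of implementing neural networks differently "in which activation functions are used", here is a hypothetical one-unit sketch (the weights are arbitrary): the same wiring behaves differently depending on the nonlinearity chosen.

```python
import math

def relu(z):
    return max(0.0, z)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def tiny_net(x, activation):
    # One hidden unit with made-up weights; only the activation varies.
    h = activation(0.8 * x + 0.1)
    return 1.5 * h - 0.2

print(tiny_net(1.0, relu))     # 1.15
print(tiny_net(1.0, sigmoid))  # ~0.87
```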

It's this learning, adapting & extending of what we know about how we think & create, in combination with the mostly independent boost of the advantages of computer chips (mostly their ability to do very fast computations), that has allowed AI to advance as far as it has so far. Nobody, myself included, can be sure of where & how the next major advances in AI will occur, but I believe they'll likely come from a combination of learning & using what we discover about how we mentally operate, along with advances in computer-related knowledge & technology (e.g., new algorithmic techniques, more & better parallel processing, quantum computers with many simultaneously operating qubits, etc.).

John Omielan
1

I would say: no, it's not just an extension of human intelligence. Actually, I would argue there's no such thing as a distinctly human intelligence; at least, it's not clearly distinguishable from intelligence in general.

If you say AI is just a set of instructions made by humans, you might be right. But what if this set of instructions contains instructions on how to change instructions? That would mean that the AI knows how to learn. What if you include instructions on how to learn to learn to learn to learn (...) to change instructions?
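As a toy sketch of "instructions that change instructions" (the names and constants below are invented for illustration): the outer program is fixed, yet it rewrites its own parameter, and a second level even adjusts how strongly that rewriting happens.

```python
def run(steps=100):
    w = 0.0       # a first-level "instruction": the model's parameter
    lr = 0.5      # a second-level instruction: how strongly to change w
    target = 4.0  # feedback from the environment
    for _ in range(steps):
        error = target - w
        w += lr * error   # instructions changing instructions (learning)
        lr *= 0.99        # instructions changing how instructions change
    return w

print(run())  # approaches 4.0 along a path nobody hard-coded
```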

At what point would you say that this intelligence is still an extension of human intelligence? If you argue this way, then you must also put "human intelligence" in one set together with every animal intelligence, because it all originates from some sort of intelligence that is based on physical brain activity.

In fact, when a child is born, it is no more intelligent than most animal species. The only thing that enhances its intelligence over time (and lets it do things like speak or use its hands as tools) is the ability to learn.

I don't see why an AI hasn't got the potential to increase its intelligence to a level where one would say: "This is not an extension of human intelligence anymore; this is something independent."

1

No, the way human minds think is in no way related to the way an AI thinks, although you could say that AI is a much simpler form of how the brain processes information. For the human brain to think, sense, and act, there are billions of connections in various cortices of the brain that process information in different ways. Treating brain information as electrical signals, you could say that different cortices of the brain show changes in power in specific frequency bands of the brain signal, which can be decoded as planning, preparation, thought, vision, movement, creativity, attentiveness, and much more.

So, to answer your question: AI could be considered an extremely minute extension of human intelligence. It's like comparing our solar system to the Milky Way, although the comparison may be a bit too large, as we are slowly becoming able to understand the underlying processes and to build fast processors that mimic brain processing, along with power-efficient hardware to run humongous neural nets. In the near future, your statement may hold true.

Roshan
1

Right, AI is an extension of human creativity, and the implied limitation is that it inherits bias through the specific choice of which features to consider. Given a set of features, it is then far better than the human mind at calculating which combination of features best helps explain the relationship being considered. Humans are too distracted to think to the depth that AI and machine learning can. But that extreme focus is not intelligence.

One of the issues that prevents the human mind from thinking at comparable depth is the need to massage the set of features that might apply; we are constantly reviewing features, adding new ones and eliminating those that do not contribute. Creativity is openness to admitting other, seemingly unrelated features and hoping for emergence, and managing to persist in being creative when emergence is delayed.

0

The answer in part seems to depend on what you mean by "human intelligence". If you mean behavior that would usually be regarded as requiring intelligence were a human to produce it, then various types of machines can be intelligent.

Such "intelligent" machines presumably include player pianos. Playing the piano and producing a melody is widely regarded as requiring human intelligence when humans do it. Player pianos produce the same sort of behavior, but without a human touching a key. Hence (so the argument goes) player pianos are intelligent.

But if "intelligence" includes having the inner process of understanding, say understanding the meanings of symbols of written language, then at least according to philosopher John Searle, purely symbol manipulating devices such as digital computers could never be intelligent. This is because symbols in themselves don't contain or indicate their meanings, and all the computing machine gets and manipulates is symbols in themselves.

However, there does seem to be a sense in which the question "Is artificial intelligence really just human intelligence" is true of computers. This is when the behavior of the machine is caused by human intelligence. A human writes a program that defines, mandates, the behavior of the machine (just like a human designs the mechanism and paper roll of a player piano). This design takes human intelligence. The machine has no intrinsic, or innate, intelligence. It's just an automaton mindlessly following the causal sequence created by the intelligent human designer.

Now, if computers are purely symbol-manipulating devices, and if Searle is right, AI is doomed, at least as long as its development platform is the digital computer (and no other machine is available or seems to be on the horizon).

However, are computers purely symbol-manipulating devices? If not, there may be a way they can acquire meanings, or knowledge, and, for instance, learn languages. If computers can receive (including from digital sensors) and manipulate more than just symbols, they may be able to acquire the inner structures and execute the inner processes needed for human-like understanding. That is, they might be able to acquire knowledge by way of sensing the environment (as humans do). A human might write the program that facilitates acquisition of such knowledge, but what the knowledge is about would be derived from the sensed environment not from a human mind.

But here we're talking about "intelligence" defined over inner processes and structures, not, or not just, external behavior. If you define human intelligence as external behavior, as the Turing test does and as AI researchers often do, then music boxes with pirouetting figurines, player pianos, and programmed computers all have human-like intelligence, and artificial intelligence as it exists today is really just the same sort of thing as human intelligence.

Roddus