After years of learning, I still can't understand what is considered to be an AI. What are the requirements for an algorithm to constitute Artificial Intelligence? Can you provide pseudocode examples of what constitutes an AI?
Here's a very similar question: [What are the minimum requirements to call something AI?](https://ai.stackexchange.com/q/1507/2444). I won't close it as a duplicate because here you are _also_ asking for pseudocode examples. – nbro Mar 04 '20 at 17:41
3 Answers
Philosophically, my own research has led me to understand AI as any artifact that makes a decision. This is because the etymology of "intelligence" strongly implies "selecting between alternatives", and these meanings are baked in all the way back to Proto-Indo-European.
(Degree of intelligence, or "strength" is merely a measure of utility, typically versus other decision making mechanisms, or, "fitness in an environment", where an environment is any action space.)
Therefore, the most basic form of automated (artificial) intelligence is:
if [some condition]
then [some action]
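This condition-action form can be sketched as runnable code. The thermostat scenario and threshold below are illustrative assumptions, not part of the answer; the point is only that a single conditional already constitutes a decision between alternatives:

```python
# Minimal condition-action "intelligence": a thermostat rule.
# The 18 degree threshold and the action strings are illustrative.
def decide(temperature_c):
    if temperature_c < 18:      # [some condition]
        return "heat on"        # [some action]
    return "heat off"

print(decide(15))  # heat on
print(decide(21))  # heat off
```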
It is worth noting that narrow AI which matches or exceeds human capability, in the popular sense, manifested only recently, when we had sufficient processing and memory to derive sufficient utility from statistical decision-making algorithms. But Nimatron constitutes perhaps the first functional strong-narrow AI in a modern computing context, and the first automated intelligences were simple traps and snares, which have been with us almost as long as we've used tools.
I will leave it to others to break down all the various forms of modern AI.

The expression strong-narrow AI is not a standard one and may be misleading or confusing. You're talking about narrow AIs that are really good, such as AlphaGo, but maybe you could use another adjective other than "strong". Btw, I think that "strong AI" is really a bad expression. AGI is definitely a better expression. – nbro Mar 05 '20 at 23:40
@nbro I noticed that certain scholars started using "strong narrow" after the advent of AlphaGo. I think you're technically correct that "strong" is not strictly required, since it references degree of intelligence, as opposed to the nature of intelligence. Here though, I am using it to connote intelligence that matches or exceeds human intelligence. Perhaps I need to define that? (PS—I agree fully that the traditional use of "strong AI" is archaic, and that AGI should be used, because it is more explicit—general intelligence is only implied in the former term.) – DukeZhou Mar 06 '20 at 00:21
AI is not a simple term. There are different types, ranging from the most simplistic rule-based AIs to black-box AIs so complicated that it's unreasonable for a human to understand exactly what they're doing.
There's no pseudocode that, if used in a program, automatically makes it an AI. It's not that black and white. But I can give examples:
Here's a rule-based chess AI that forfeits if it's too far behind, and plays aggressively if it's far enough ahead.
aggressive = False
if player.score - my.score > 10:
    forfeit()
elif my.score - player.score > 10:
    aggressive = True

for each piece of my.pieces:
    for each square of board.squares:
        if aggressive and noThreats(square):
            move(piece, square)
            return
This is considered an "AI" because it feigns intelligence - appearing to have a true understanding of chess while simply following a set of rules, making it an "Artificial" Intelligence.
Here's another more complicated AI:
decisionNet = NeuralNetwork(64 inputs, 2 outputs)
choice = decisionNet(board.squares) // Returns a source square with one of my pieces and a destination square
move(choice)
This uses a neural network to make the decision, which could have been trained on a bunch of example games or against itself. Due to this "training phase", humans can't understand precisely what the network is doing without extensive effort, so it gives an even more convincing impression of understanding chess. But if we wanted to, we could still work out the nuances of this network and show that it doesn't possess intelligence; it again only feigns it.
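The idea above can be sketched with a tiny untrained feedforward network. The architecture (64 inputs, one hidden layer, 2 outputs), the random weights, and the board encoding are all illustrative assumptions; a real engine would learn the weights from example games or self-play:

```python
import numpy as np

# A tiny feedforward network mapping a 64-square board encoding to two
# outputs, interpreted as a source and destination square. Weights are
# random here, standing in for weights learned in a "training phase".
rng = np.random.default_rng(0)
W1 = rng.normal(size=(64, 32))   # input layer -> hidden layer
W2 = rng.normal(size=(32, 2))    # hidden layer -> (source, destination)

def decision_net(board):
    hidden = np.tanh(board @ W1)             # hidden activations
    raw = hidden @ W2                        # two raw output values
    # Map each raw output onto a square index in 0..63
    return tuple(int(abs(v)) % 64 for v in raw)

board = rng.normal(size=64)      # stand-in for a real board encoding
src, dst = decision_net(board)
print(src, dst)                  # two square indices in the range 0..63
```

Even this toy version shows why such a decision is harder to inspect than the rule-based example: the "reason" for a move is spread across 2,112 weight values rather than a few readable conditions.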
I should mention that virtually any code that has an if statement can be considered AI. The examples I provided are just easier to pass off as understanding a very complicated concept (chess), as opposed to, say, verifying a user login. Both have the same fundamentals; one just appears more complicated on the surface than the other.
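To make the comparison concrete, here is the login check mentioned above reduced to the same condition-action fundamentals as the chess example. The usernames and passwords are illustrative only (real systems would hash passwords, not store them in plain text):

```python
# Same fundamentals as the chess rules: a condition selects an action.
def verify_login(username, password, users):
    if username in users and users[username] == password:
        return "access granted"
    return "access denied"

users = {"alice": "s3cret"}  # illustrative credential store
print(verify_login("alice", "s3cret", users))  # access granted
print(verify_login("alice", "wrong", users))   # access denied
```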
A related question is: "How do you build intelligent behavior from natural-language-like pseudocode?"
I am layering a very natural-like language (NLL) above Python code and modules. This will generate and execute code from English NLL pseudocode. But pseudocode is just code if the BNF definition works. You can see the basics below.
I'm adding this NLL to my Iceberg/Sagent intelligent-agent AI. Theoretically, that will be more like what you mean by "AI".
As the first answer says, AI is "any artifact that makes a decision." I would elaborate: "An AI performs an action by making a decision."
My Iceberg/Sagent AI uses a layer above Python modules, turning them into Sagents (or intelligent agents). Sagents process data using a type of decision-tree methodology. This will be programmed via instincts, basically built-in (bootstrap) NLLs.
The first answer also said there must be "sufficient processing and memory to derive sufficient utility from statistical decision making".
By "sufficient", I would agree there needs to be a large body of instinctive or bootstrapped behavior to get the AI to the point where it can grow its knowledge in an unattended way. But just as you should never leave a child unattended, the same is true for an AI. I've had unattended experiments that have gone a bit batshit. I'm worried about this aspect of AI: that someone will create a dangerous AI and turn it loose. An "AI robot" could become a real issue at some point in the next few years.
Anyway, below is a simplistic thought-starter on NLL pseudocode:
Built-Ins
|"execute python", code|
|"python", code|
|"execute c", code|

ai_def( "a human sentence" ):
    sentence = |"python","input('Enter a sentence: ')"|
    return( sentence )

ai_def( "remove stopwords from", tokens ):
    stopwords = |"python","nltk.corpus.stopwords.words('english')"|
    non_stops = |"python","tokens.difference(stopwords)"|
    return( non_stops )

ai_def( "keywords from", sentence ):
    tokens = |"python","nltk.word_tokenize( sentence )"|
    keys = |"remove stopwords from", tokens|
    return( keys )

ai_def( "add to sayings" ):
    keys = |"keywords from", saying=|"a human sentence"||
    |"add to sayings dictionary using", keys, saying|

ai_def( "get saying for human" ):
    subject, verb = |"get subject verb of", saying=|"a human sentence"||
    match = |"find saying matched from", subject, verb|
    report( match )
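One way to sketch the core of this layering in plain Python is a registry that maps a natural-language phrase to a callable, in the spirit of the ai_def blocks above. The registry, decorator, and tiny stopword set here are illustrative assumptions, not the actual Iceberg/Sagent implementation:

```python
# Map natural-language phrase names to Python callables, so code can be
# invoked by phrase. The names and the tiny stopword set are illustrative.
registry = {}

def ai_def(phrase):
    """Register a function under a natural-language phrase."""
    def wrap(fn):
        registry[phrase] = fn
        return fn
    return wrap

def call(phrase, *args):
    """Look up a phrase and invoke the function registered for it."""
    return registry[phrase](*args)

@ai_def("keywords from")
def keywords_from(sentence):
    stopwords = {"the", "a", "is", "of"}  # stand-in for nltk's stopword list
    return [w for w in sentence.lower().split() if w not in stopwords]

print(call("keywords from", "The meaning of a sentence"))
# ['meaning', 'sentence']
```

A fuller version would parse the |"phrase", args| syntax itself, but even this dispatch-table core shows how English-like phrase names can sit one layer above ordinary Python functions.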
Anybody interested?
Or, if something already exists, let me know.
