
A general AI x creates another AI y which is better than x.

y creates an AI better than itself.

And so on, with each generation's primary goal to create a better AI.

Is there a name for this process?

By better, I mean greater survivability, the ability to solve new problems, to enhance human life physically and mentally, and to advance our civilization toward an intergalactic one, to name a few criteria.

  • Not marking any answer as accepted, since many people might provide their own take on this; otherwise, one would just see the accepted answer and miss others that might provide greater insight into AI. – Ashwin Rohit Nov 03 '19 at 15:14

3 Answers


I don't think there is a single standard word or phrase that covers just this concept. Perhaps recursive self-improvement matches the idea most concisely, but that is not AI-specific jargon.

Very little is understood about what strength this effect can have or what the limits are. Will 10 generations of self-improvement lead to a machine that is 10% better, 10 times better, or $2^{10}$ times better? And by what measure?
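A quick way to see how much the answer depends on the assumed per-generation gain is a toy calculation. This is my own illustration with arbitrary growth factors, not a claim about real systems; it only shows how compounding makes the three outcomes above diverge:

```python
def capability_after(generations, factor):
    """Capability if each generation multiplies capability by `factor`,
    starting from a baseline capability of 1.0."""
    return factor ** generations

gens = 10
print(capability_after(gens, 1.01))  # ~1.10: roughly "10% better" overall
print(capability_after(gens, 1.26))  # ~10 times better
print(capability_after(gens, 2.0))   # 2^10 = 1024 times better
```

The same number of generations gives wildly different outcomes depending on whether each step's improvement is marginal or multiplicative, which is exactly why the strength of the effect is the open question.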

Some futurologists suggest this might be a very strong effect, and use the term Singularity to capture the idea that intelligence growth through recursive self-improvement will be rapid, exceed human intelligence, and lead to some form of super-intelligent machine; the point at which this is reached is called The Singularity. Ray Kurzweil is a well-known proponent of this idea.

Specifically, use of the term Singularity implies more than just the basic recursion you suggest, and includes assumptions of a very large effect. Technically, it also refers to the stage that results from the recursion, not the recursion itself.

However, despite the popularity of the concept, whether or not such a self-improving system will have a large impact on the generation of intelligent machines is completely unknown at this stage. Related research on general intelligence is still in its infancy, so it is not even clear what would count as the first example of system x.

Neil Slater
  • system x sounds good to me! – nickw Oct 30 '19 at 18:07
  • Would it be fair to mention genetic algorithms? They do not operate on the same scale as @Ashwin is envisioning, but they create multiple different versions of themselves and their architecture, before testing and searching for which versions are best suited to solve their tasks, etc. – Krrrl Nov 01 '19 at 12:17
  • 1
    @Krrrl: I don't think they are relevant in the context of this answer. Perhaps evolutionary algorithms might be part of a solution to self-improving systems, perhaps not. – Neil Slater Nov 01 '19 at 14:39

Direct Answer to Your Question:

Google uses the term: Automated Machine Learning.


What this Answer is About:

" ... A general AI x creates another AI y which is better than x. ... " ~ Ashwin Rohit (Stack Exchange user, Opening Poster)

What is the term for this: "A.I. creating A.I."?


Some theory behind this:

"The AutoML procedure has so far been applied to image recognition and language modeling. Using AI alone, the team have observed it creating programs that are on par with state-of-the-art models designed by the world’s foremost experts on machine learning." – Google's AI Is Now Creating Its Own AI. (2017, May 22). Retrieved from < https://www.iflscience.com/technology/google-ai-creating-own-ai/ >


Layperson Explanation:

" ... Unfortunately, even people who have plenty of coding knowledge might not know how to create the kind of algorithm that can perform these tasks. Google wants to bring the ability to harness artificial intelligence to more people, though, and according to WIRED, it's doing that by teaching machine-learning software to make more machine-learning software.

The project is called AutoML, and it's designed to come up with better machine-learning software than humans can. As algorithms become more important in scientific research, healthcare, and other fields outside the direct scope of robotics and math, the number of people who could benefit from using AI has outstripped the number of people who actually know how to set up a useful machine-learning program. Though computers can do a lot, according to Google, human experts are still needed to do things like preprocess the data, set parameters, and analyze the results. These are tasks that even developers may not have experience in. ... "

– Google's AI Can Make Its Own AI Now. (2017, October 19). Retrieved from < https://www.mentalfloss.com/article/508019/googles-ai-can-make-its-own-ai-now >

We use programs to write programs.

Researchers often need algorithmic tools to solve complicated problems, but they don't always have the technical experience to build them. AutoML is an artificial-intelligence-based solution to the ever-growing challenge of applying machine learning to such problems.

This allows non-experts to build machine learning models and obtain good predictive performance from them.

There is the potential for feedback between systems when A.I. feeds into A.I., which could continue to feed into itself, ad infinitum.


Business Applications and Practical Uses:

Refer to the book Automated Machine Learning for Business.


Technical Mirror:

"What is AutoML? Automated Machine Learning provides methods and processes to make Machine Learning available for non-Machine Learning experts, to improve efficiency of Machine Learning and to accelerate research on Machine Learning.

Machine learning (ML) has achieved considerable successes in recent years and an ever-growing number of disciplines rely on it. However, this success crucially relies on human machine learning experts to perform the following tasks:

  • Preprocess and clean the data.
  • Select and construct appropriate features.
  • Select an appropriate model family.
  • Optimize model hyperparameters.
  • Postprocess machine learning models.
  • Critically analyze the results obtained.

As the complexity of these tasks is often beyond non-ML-experts, the rapid growth of machine learning applications has created a demand for off-the-shelf machine learning methods that can be used easily and without expert knowledge. We call the resulting research area that targets progressive automation of machine learning AutoML."

– AutoML. (n.d.). Retrieved from < http://www.ml4aad.org/automl/ >
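The expert tasks quoted above, such as choosing hyperparameters, are what AutoML systems try to automate. As a minimal sketch (my own illustration, not Google's AutoML or any specific library), here is a random search over hyperparameters, with a hypothetical stand-in scoring function in place of real model training and validation:

```python
import random

random.seed(0)

def validation_score(learning_rate, num_layers):
    # Hypothetical stand-in for "train a model and return its
    # validation accuracy"; a real system would fit a model here.
    return 1.0 - abs(learning_rate - 0.01) * 10 - abs(num_layers - 4) * 0.05

def random_search(trials=50):
    """Try random hyperparameter configurations and keep the best one."""
    best_config, best_score = None, float("-inf")
    for _ in range(trials):
        config = {
            "learning_rate": 10 ** random.uniform(-4, -1),  # log-uniform sample
            "num_layers": random.randint(1, 8),
        }
        score = validation_score(**config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

config, score = random_search()
print(config, score)
```

Real AutoML systems go much further (searching over architectures and preprocessing pipelines, not just numeric knobs), but the core loop of propose, evaluate, keep the best is the same.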



  • I strongly disagree that AutoML is relevant to recursive self-improvement. The Mental Floss headline is misleading. Even interpreting it charitably, it is only giving one level of improvement, not recursive. AutoML doesn't create new ML programs, it just automates some training. – jmmcd Nov 05 '19 at 01:12

The first thing that comes to mind when reading your question is Genetic algorithms.

They create alternate versions of themselves and measure each version's performance on a specific task, discarding those that work poorly while keeping the best ones for the next generation. The mutations here are often random, and for large or complex problems these simulations can take an incredibly long time. As you can see, this group of algorithms is heavily inspired by evolution and biology.
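The loop described above can be sketched in a few lines. This is a minimal toy example (my own illustration, using an arbitrary bit-matching task as the fitness target), not a production GA library:

```python
import random

random.seed(42)
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]

def fitness(individual):
    # Number of bits matching the target (higher is better).
    return sum(a == b for a, b in zip(individual, TARGET))

def mutate(individual, rate=0.1):
    # Flip each bit with a small probability.
    return [(1 - bit) if random.random() < rate else bit for bit in individual]

def evolve(pop_size=20, generations=50):
    """Create variants, score them, keep the best half, mutate to refill."""
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]           # keep the best half
        children = [mutate(random.choice(survivors)) for _ in survivors]
        population = survivors + children                 # next generation
    return max(population, key=fitness)

best = evolve()
print(best, fitness(best))
```

Because the best individuals survive unchanged each generation, fitness never decreases, and the population gradually converges on the target.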

I realize, reading the last part of your question, that this might be on a much smaller scale than you envisioned. But, in essence, genetic algorithms do what you describe in the first part.

For the more grand-scale question, see @Neil Slater's answer.

Krrrl
  • Usually genetic algorithms don't "create alternate versions of themselves", but they manage a search within a loosely-defined space. The GA mechanisms in play (interpretation of the genome, assessing fitness, selection, creating the next generation) are not themselves the subjects of evolutionary algorithms. They could be, in principle, but you would need to define a fitness function for them, which is hard. – Neil Slater Nov 01 '19 at 14:43
  • I see now that my formulation there was inaccurate. I agree about the fitness function, but I could only think of GA as a collective of methods that "try to find a better AI" - apart from your answer about the singularity - which I admit sounds more like what Ashwin is asking about here. – Krrrl Nov 01 '19 at 15:58
  • Respectfully, can we get some sources to back this answer up? – Donate to the Edhi Foundation Nov 04 '19 at 02:33
  • An answer being counter-intuitive (e.g. linking genetics with A.I.) does not necessarily mean it is wrong/invalid. – Donate to the Edhi Foundation Nov 05 '19 at 10:07
  • @Tautologicalrevelations John Doucette edited in a link to the genetic algorithms page. Sorry for not being precise enough in my original reply. – Krrrl Nov 06 '19 at 15:25
  • I think your answer is interesting. I don't know much about genetics. Best wishes. :) – Donate to the Edhi Foundation Nov 06 '19 at 15:30