
My thinking is that you input a paragraph or sentence, and the program boils it down to the primary concept(s).

Example:

Input:

Sure, it would be nice if morality was simply a navigation toward greater states of conscious well-being, and diminishing states of suffering, but aren't there other things to value independent of well-being? Like truth, or beauty?

Output:

Questioning moral philosophy.


Is there any group that's doing this already? If not, why not?
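For concreteness, the behaviour I have in mind would look roughly like this (a hypothetical interface only, not a reference to any existing library):

```python
# Hypothetical interface only; no existing library or model is implied.
def primary_concepts(text: str) -> list[str]:
    """Boil a paragraph or sentence down to its primary concept(s)."""
    raise NotImplementedError  # this is exactly the part I am asking about

# Desired behaviour:
# primary_concepts("Sure, it would be nice if morality was simply ... "
#                  "Like truth, or beauty?")
# -> ["questioning moral philosophy"]
```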

Otherness
    "Natural language understanding" and "Auto summarization" are the keyphrases that should lead you to the relevant work. – Ray May 15 '20 at 17:05

1 Answer


Identifying the primary concepts of a paragraph requires understanding the meaning of the text. In natural language processing we are still a long way from even recognising and representing the meaning of text, let alone summarising the meaning of multiple sentences into a single statement.

Note that this is different from simply summarising a text: that can be done without any understanding, relying only on surface features of the text itself, and methods for doing so have been around for a while. But such approaches generally work by removing the sentences that seem least relevant, thus shortening the text; they do not express the content in different words.
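To illustrate the extractive idea, here is a minimal sketch in which sentences are scored by word-frequency overlap and the highest-scoring one is kept verbatim. The scoring heuristic is my own simplified assumption, not a description of any particular system.

```python
import re
from collections import Counter

def extractive_summary(text: str, n_sentences: int = 1) -> str:
    """Pick the n highest-scoring sentences, scored by word frequency.

    A toy frequency-based heuristic: it shortens the text by selecting
    existing sentences; it never rephrases anything.
    """
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> float:
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)

    chosen = sorted(sentences, key=score, reverse=True)[:n_sentences]
    # Preserve the original order of the chosen sentences.
    return " ".join(s for s in sentences if s in chosen)

paragraph = (
    "Sure, it would be nice if morality was simply a navigation toward "
    "greater states of conscious well-being, and diminishing states of "
    "suffering, but aren't there other things to value independent of "
    "well-being? Like truth, or beauty?"
)
print(extractive_summary(paragraph))
```

Note that the output is always a sentence copied verbatim from the input; it never produces a reformulation like "Questioning moral philosophy."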

Conceivably one could try this with deep learning, training a system on paragraphs paired with the corresponding concepts, but again such a system would have no understanding of the meaning, so the results would be more or less accidental.
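As a rough sketch of what such a supervised setup might look like, one could fine-tune a generic sequence-to-sequence model on (paragraph, concept) pairs, for example with the Hugging Face transformers library. The model name, the tiny training set, and the hyperparameters below are placeholders, and a recent transformers/PyTorch install is assumed; nothing here implies the resulting system would "understand" the text.

```python
# Sketch of the supervised "paragraph -> concept" idea; illustrative only.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "t5-small"  # any small seq2seq model would do for a sketch
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Tiny made-up training set of (paragraph, concept) pairs.
pairs = [
    ("Sure, it would be nice if morality was simply a navigation toward "
     "greater states of conscious well-being, but aren't there other things "
     "to value independent of well-being? Like truth, or beauty?",
     "questioning moral philosophy"),
    # ... many more labelled examples would be needed in practice
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
model.train()
for epoch in range(3):
    for paragraph, concept in pairs:
        inputs = tokenizer(paragraph, return_tensors="pt", truncation=True)
        labels = tokenizer(concept, return_tensors="pt", truncation=True).input_ids
        loss = model(**inputs, labels=labels).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# After (far more) training data, generation only reproduces surface patterns
# seen during training; there is no model of meaning behind it.
model.eval()
inputs = tokenizer("Is beauty worth valuing for its own sake?", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=10)[0],
                       skip_special_tokens=True))
```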

Oliver Mason
  • Thanks. I agree that "understanding" in the way we talk about it might not apply here, but I think "calibrated" is more appropriate than "accidental". I don't think AlphaStar wins games by accident. – Otherness Apr 15 '20 at 18:55
  • @Otherness Playing games is a whole different topic from understanding open ended meanings. You cannot improve by 'playing' against yourself there. – Oliver Mason Apr 15 '20 at 20:48
  • You're right, not having self-play is a big difference. I wonder if there's some equivalent method of self-improvement for a natural language processing program, similar to self-play for gaming programs. Perhaps we need another big breakthrough similar to GANs. I guess time will tell. – Otherness Apr 15 '20 at 21:26
  • And when dealing with human languages, not everyone speaks perfectly. Most things need to be understood within a larger context. The simple command "*Lay down on the sheet!*" should be interpreted by a robot as my request to fetch some feathers for the quilt we are about to make; but, if someone less pedantic than I am says the same command, it should be interpreted as an ungrammatical request to get into position for its crankcase oil change. The robot's brain has to allow for non-obvious errors in the input, and to derive the intended meaning based on additional situational knowledge. – Ray Butterworth May 15 '20 at 13:47
  • Good point, Ray. Interpreting context and imperfect communication seems quite the challenge. In fact, my first reaction to your text command was thinking you were about to start a massage. – Otherness May 16 '20 at 20:58