In the paper Governance by Glass-Box: Implementing Transparent Moral Bounds for AI Behaviour, the authors seem to be presenting a black box method of testing. Are these ideas really new? Weren't these ideas already proposed in Translating Values into Design Requirements (by Ibo Van de Poel)? Black-box testing had already been proposed much earlier.
1 Answer
Van de Poel's paper, Translating Values into Design Requirements, articulates a framework for mapping abstract values and norms into concrete design requirements that an engineer can work with. The example used in the paper is mapping beliefs about animal welfare into design constraints on chicken coops.
The newer paper by Tubella et al. on Governance by Glass-Box builds on Van de Poel's idea (and in fact cites Van de Poel several times). It suggests that we should use Van de Poel's design process, but that for something like an AI system we also need an "observation phase" to validate the system: unlike a chicken coop, an AI system may appear to have met its design constraints and yet routinely violate them in production.
So you're right that the Tubella et al. paper essentially proposes combining Van de Poel's framework for translating values into design constraints with the older idea of black-box testing, but this combination itself appears to be a new, if modest, contribution.

- But the "observation phase" (along with the feedback loop) for black boxes is not a new concept, as can be seen in various earlier papers on black-box testing and even the Wikipedia page on black boxes (https://en.wikipedia.org/wiki/Black_box). The authors didn't cite any of the numerous papers on black-box testing, and make it seem that they contributed more than they really did. – Kaspar May 22 '19 at 00:41
- @Kaspar This is not really unusual for academic papers, especially in non-technical areas. The paper is basically saying "here's an old idea from engineering, and here's how you could adapt it to work in an AI context." No researcher reading this will think of it as a major contribution, but easily 90% of published research makes small, incremental contributions, so this is not unusual. – John Doucette May 22 '19 at 01:53
- Much of research is incremental, but very few papers that are this trivial get accepted at IJCAI. – Kaspar May 22 '19 at 08:10
- @Kaspar I don't have access to the IJCAI proceedings (2019?) to verify this, but it seems likely that this paper was accepted to the Special Track on AI for Improving Human Wellbeing (https://www.ijcai19.org/call-for-paper-AI-for-improving-human-wellbeing.html). Special tracks are often less competitive than the main conference proceedings. – John Doucette May 22 '19 at 16:13