I have trouble finding material (blog posts, papers) about this issue, so I'm posting here.
Taking a recent well-known example: Musk has tweeted and warned about the potential dangers of AI, saying it is "potentially more dangerous than nukes", referring to the issue of creating a superintelligence whose goals are not aligned with ours. This is often illustrated with the paperclip maximiser thought experiment. Let's call this first concern "AI alignment".
By contrast, in a recent podcast, his concerns seemed more related to getting politicians and decision makers to acknowledge the issue and cooperate on it, in order to avoid potentially dangerous scenarios like an AI arms race. In a paper co-authored by Nick Bostrom, Racing to the Precipice: a Model of Artificial Intelligence Development, the authors argue that developing AGI in a competitive situation incentivises developers to skimp on safety precautions, making the development process more dangerous. Let's call this second concern "AI governance".
My question is about the relative importance of these two issues: AI alignment and AI governance.
It seems that most institutions trying to mitigate such risks (MIRI, FHI, FLI, OpenAI, DeepMind and others) simply state their mission without arguing why one approach should be more pressing than the other.
How can one assess the relative importance of these two issues? And can you point me to any literature on this?