I think that a large chunk of the worry about AGI isn't driven by any inherent properties or unknown unknowns of AGI per se; rather, it is based on inherent properties and known knowns of humans. Even the most cursory examination of history shows that humans are perfectly capable of committing the most horrendous atrocities, all the while bathed in a burning sense of self-righteousness.
Indeed, as often as advances in science and engineering have revolutionized peace and prosperity, just as often have they revolutionized desolation and war. Bronzework enabled the plow, but also the sword. Shipbuilding and astronomy connected the two hemispheres, but also carried death and disease between them. Industrialization brought an end to peasantry, but contaminated our water and our air. Dynamite yielded us the riches of the earth, but brought destruction from the skies. Nuclear physics promised both limitless free energy and complete and utter annihilation; so far it has delivered fully on neither, but time will yet tell.
Given these and countless other examples from our history, it is clear that great leaps in science can greatly affect humans' ability to improve life, or to inflict death. Undoubtedly, the creation of AGI will be a great leap in science, probably the greatest yet achieved by humankind. As such, it will bring untold opportunities to enhance our abilities in many domains: for peace, but also for war. Will the use case for war ultimately win the day? It is hard to tell from this juncture. However, it seems obvious that, if AGI is very useful for waging war, and war is waged, then AGI will swiftly find itself on the front lines. Should we worry about AGI because AGI will bring war? Probably not. Should we worry about AGI because we are liable to find ourselves at war all on our own, and AGI may enhance our ability to destroy in ways previously unknown? Absolutely.
There may also be commons that we have hitherto taken for granted, and that AGI may significantly improve our ability to ruin. In the early days of industry, the atmosphere was generally taken for granted. Local effects were witnessed, but the idea that we might ruin the atmosphere on a global scale was not taken seriously. Only in retrospect do we see how naive this was. What commons exist that we have simply lacked the means to ruthlessly exploit for profit? One that has been in the news lately is the space of information. People rely on the internet for information; it is a "commons" in some sense. Already people are learning that this commons can be exploited for financial or political gain. Will AGI be the "industrial revolution" that scales up our exploitation of this commons by orders of magnitude? What other commons might suddenly find themselves under siege in an age of AGI? Will we end up with the infamous paperclip maximizer, in some other guise?
Of course, I don't mean to come off as a Luddite, but we as humans should probably (for the first time in history, I think) try to foresee some of these issues and regulate them at an early stage, rather than after the problem has already gotten out of hand. One way to help with this, from an "unknown unknowns" perspective, is simply to advance slowly, allowing time for our plodding regulatory frameworks to catch up. I think that this was the spirit of the recent call to pause AI research.
The point of that letter, and of this attitude of caution, isn't, in my humble opinion, that we need time to figure out how to make AGI non-dangerous due to inherent issues with AGI. The need is to arrange our human institutions in such a way as to prevent humans from being dangerous once they have access to AGI. So any "reasons to believe that AGI will not be dangerous" must, first and foremost, address these human concerns: Are there reasons to think that humans could not or would not use AGI to destroy each other? Are there reasons to think that humans could not or would not use AGI to exploit the commons? Possibly one could argue that we, as humans, have stronger global peacekeeping institutions (the UN, the EU, NATO, etc.) than ever before, and that these will help prevent bad actors from using AGI to wage war. In any case, humanity hasn't managed to utterly annihilate itself with science yet, so maybe that should give us some hope.