12.10 Conclusion: Eight Ways to Bias AGI Toward Friendliness
12.10.1 Encourage Measured Co-Advancement of AGI Software and AGI Ethics Theory
Everything involving AGI and Friendly AI (considered together or separately) currently involves
significant uncertainty, and it seems likely that significant revision of current concepts will prove
valuable as progress along the path toward powerful AGI proceeds. However, whether there is
time for such revision to occur before AGI at the human level or above is created depends on
the pace of our progress toward AGI. What one wants is for progress to be slow enough that,
at each stage of intelligence advance, concepts such as those discussed in this paper can be
re-evaluated and re-analyzed in light of the data gathered, and AGI designs and approaches
can be revised accordingly as necessary.
However, due to the nature of modern technology development, it seems extremely unlikely
that AGI development is going to be artificially slowed down in order to enable measured
development of accompanying ethical tools, practices and understandings. For example, if one
nation chose to enforce such a slowdown as a matter of policy (speaking about a future date
at which substantial AGI progress has already been demonstrated, so that international AGI
funding is dramatically increased from present levels), the odds seem very high that other
nations would explicitly seek to accelerate their own progress on AGI, so as to reap the ensuing
differential economic benefits (the example of stem cells arises again).
And this leads on to our next and final point regarding strategy for biasing AGI toward
Friendliness...
12.10.2 Develop Advanced AGI Sooner, Not Later
Somewhat ironically, it seems the best way to ensure that AGI development proceeds at a
relatively measured pace is to initiate serious AGI development sooner rather than later. This is
because the same AGI concepts can be developed into working systems only slowly today, more
quickly 10 years from now, and more quickly still 20 years from now, and so on, due to the ongoing
rapid advancement of various tools related to AGI development, such as computer hardware,
programming languages, and computer science algorithms, and also the ongoing global advancement
of education, which makes it increasingly cost-effective to recruit suitably knowledgeable
AI developers.
Currently the pace of AGI progress is sufficiently slow that practical work is in no danger
of outpacing associated ethical theorizing. However, if we want to avoid the future occurrence
of this sort of dangerous outpacing, our best practical choice is to make sure more substantial
AGI development occurs in the phase before the development of tools that will make AGI
development extraordinarily rapid. Of course, the authors are doing their best in this direction
via their work on the CogPrime project!
Furthermore, this point bears connecting with the need, raised above, to foster the development
of Global Brain technologies that "Foster Deep, Consensus-Building Interactions
Between People with Divergent Views." If this sort of technology is to be maximally valuable,
it should be created quickly enough that we can use it to help shape the goal system content of
the first highly powerful AGIs. So, to simplify just a bit: We really want both deep-sharing GB
technology and AGI technology to evolve relatively rapidly, compared to computing hardware
and advanced CS algorithms (since the latter factors will be the main drivers behind the ac-