The discussion here dismisses Bostrom’s concerns primarily on the grounds that he under-considers a scaling problem. In fact, the book explicitly takes on scaling problems, and combinatorial-explosion problems, in weighing the comparative likelihood that a superintelligence would be produced by the various routes he considers. Indeed, Bostrom at various points downplays processes that rely on “brute force,” which is in essence what the charming formulation of Turing’s library represents. He seems more concerned about a series of events that would begin with partial brain emulation and then advance beyond general human intelligence through evolutionary models (though here, too, he identifies vexing challenges for AI researchers).

While I have no ability to evaluate his speculative accounts of how long various potential routes might take to travel, it’s not quite right to dismiss the entire problem of emergent superintelligence because of the implausibility of one route. I suspect, in any case, Bostrom would be happy to see the discussion, since his primary purpose seems to be to deliver the message that highly advanced AI could very possibly be a dangerous thing.

1. Since the number of minions is the same on both sides, we can sort our minions’ attack powers, a’_1 … a’_n, and the enemy minions’ healths, h_1 … h_n. Assume both sequences are sorted in non-increasing order (the 1st is the largest, the nth the smallest). Because each minion can attack only once, we can clear the board iff a’_i >= h_i for all i.
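The greedy check from point 1 can be sketched in a few lines (function and variable names here are illustrative, not from the original problem statement):

```python
def can_clear(attacks, healths):
    """Check whether our minions can clear the board, assuming each
    minion attacks exactly once and a minion with attack a kills an
    enemy with health h iff a >= h."""
    # Sort both sides in non-increasing order and pair them up:
    # the i-th strongest attacker is matched with the i-th toughest enemy.
    a = sorted(attacks, reverse=True)
    h = sorted(healths, reverse=True)
    return all(ai >= hi for ai, hi in zip(a, h))
```

The pairing is the standard exchange argument: if the i-th strongest attacker cannot kill the i-th toughest enemy, no reassignment helps, since every weaker attacker fails against that enemy too.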

2. I think 2-partition can be reduced to this problem when total attack = h1 + h2. What we need to fix is that 63 != 94. To fix this, we add two very large numbers (each at least the sum of our attack values) whose difference is 31 (say x and x + 31), which ensures that x and x + 31 land on different sides when we do the partition. So this is weakly NP-hard.
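The “weakly” in weakly NP-hard means a pseudo-polynomial algorithm exists. Here is a minimal subset-sum reachability sketch showing that side of the claim: with n values bounded so that subset sums stay at most V, it runs in roughly O(n * V) time. The names are illustrative, not taken from the original problem.

```python
def subset_sum(values, target):
    """Pseudo-polynomial check: can some subset of `values` sum to
    `target`? This is the DP that makes partition-style problems
    only *weakly* NP-hard."""
    reachable = {0}  # all subset sums achievable so far
    for v in values:
        # Each value either joins an existing subset sum or is skipped.
        reachable |= {s + v for s in reachable}
    return target in reachable
```

Deciding whether our attack values can be split into groups summing exactly to h1 and h2 is then one call with target h1, after checking that the grand total equals h1 + h2.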

3. This one is knapsack, so it is also weakly NP-hard.
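For the same reason as above, the knapsack claim in point 3 comes with a pseudo-polynomial DP. A standard 0/1 knapsack sketch (variable names are illustrative), running in O(n * W) time for capacity W:

```python
def knapsack(weights, values, capacity):
    """Maximum total value achievable within `capacity`, taking each
    item at most once (0/1 knapsack, pseudo-polynomial DP)."""
    # best[c] = best value achievable with capacity exactly <= c
    best = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]
```

The running time is polynomial in the *magnitude* of the capacity but exponential in its bit-length, which is exactly the weak-NP-hardness distinction.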
