Of course, there's really not much to refute. It's Charlie's opinion that such a thing is unlikely. What's the big deal? You'd have thought he traveled back in time and performed an abortion on the baby Jesus, or something. And actually, his argument really boils down to this:
"human intelligence is an emergent phenomenon of human physiology, and it only survived the filtering effect of evolution by enhancing human survival fitness in some way. Enhancements to primate evolutionary fitness are not much use to a machine, or to people who want to extract useful payback (in the shape of work) from a machine they spent lots of time and effort developing".

So, what I get from this is he's saying either 1) a machine intelligence based upon human architecture is probably not the most optimal machine intelligence you could make, or 2) a machine intelligence based upon a human architecture is probably going to be no damn better at anything a human can do already. Furthering the argument: after wasting time to make a human intelligence, the next step - a superhuman intelligence built from the same human template - is an equal waste of time.
What he is not saying is you can't build a thinking machine, or that there is no point in trying, which seems to be lost on his detractors. Any arguments in this direction are therefore irrelevant.
My objection to either 1) or 2) above is: So Fucking What?
I can find plenty of sub-optimal designs in nature (many of them existing in our own bodies) that are, well... sub-optimal designs. Kluges, Rube-Goldberg-like biological contraptions with barely functional features pressed into service via chance and contingency, exist all throughout nature. They are sub-optimal, inefficient, grotesque, gangly, awkward, clumsy, even downright butt-ugly, and yet, there they are, surviving, even thriving, in the universe. So, so what if you use a beach ape operating system to manipulate information in a computer persona or entity? It would seem that Charlie's objection is more aesthetic than logical, and therefore, easily dismissed.
What about 2)? Is it worth it? What if, after a huge amount of time and effort, you end up with a human intelligence that basically tells you to fuck off? Well, anyone who's raised kids, or been near someone who's raised kids, knows that that possibility cannot be ruled out. (Assuming that you've raised kids for the inhumanly selfish sole purpose of getting some work out of them). And are there better means of cranking out robots or computers which are not sapient and yet will perform whatever task it is you want? Again, yes, of course. But there are just a few, just a very few, instances where it would be advantageous to have an artificially intelligent generalist around to do work in dangerous circumstances, where it would require even more time and effort to equip a human to survive. Deep sea or space travel springs to mind. Again, Charlie's objection seems more aesthetic than practical or logical.
Now, I can think of a good argument for not pursuing hyperintelligent AIs, which is the existential threat: I build an intelligence which decides that I'm the useless one. That scenario I think we need not go into. But I think it an unlikely one, a danger resting more on an arbitrary choice than on anything else.
But I do think there is one exception to Charlie's objections which I haven't heard yet. I've mentioned in the previous essay that perhaps we are a bit more optimal than Charlie lets on - given that we have these tool incorporation adaptations, and, if you consider experiments with virtual reality body configurations and the like, a flexibility to be something other than human. There is good evidence that this thing we call our mind is constantly simulating ourselves. We are dynamically updating our identities. We are actively seeking out data (within the body and outside it, including our accoutrements and possessions) to simulate us. And I think we are generalist enough to handle cyberspace much more fluently than is generally surmised.
But another refutation is: Love. Yes, my loves, Love.
If you think about it, we already are kind of a superorganism as a society, which suggests a more than human intelligence operating out there. (I would even surmise that occasionally we form collective - usually psychotic - personalities, as when Europe, or part of Europe, briefly became an entity called Caesar, or Napoleon, or Hitler).
We are more than our embodied brains simulating us and our incorporated environs. We also incorporate into ourselves our loved ones. I'd love to take credit for this, but hear out Doctor Miguel Nicolelis, in his book "Beyond Boundaries" (page 219):
"Although experimental evidence remains scant, I firmly believe that, in their perfectionist drive to achieve the ultimate simulation of the self, our brains also incorporate, as a true part of each of us, the bodies of the other living things that surround us in our daily existence. The refined neural simulation routine I am proposing may be better understood if we call its final product by its more colloquial and well-publicized name: love."

Right now, I've family in Wisconsin, Indiana, California. I've a niece in Kansas City, Mo. I've friends in places as diverse as Texas and Hawaii. I'm a continent-straddling Colossus. I've a sister-in-law in Paris, France at the moment. Which bumps me up to international status.
I have loving memories of uncles and aunts and grandparents living and long departed. I've, through them, memories and stories that span a century or more. I've read books. I've watched movies. Heard songs. Enjoyed poems. Examined paintings. My memories, metaphorically certainly, in actuality quite possibly span the entire age of the universe.
I am more than just who I am. And so are you. I think, to simulate us in a computer, requires much, much more than just my current connectome. To simulate me, you might just need to simulate my world, my universe.
A superintelligent AI from me, or someone like me, alone? Unlikely.
But me, plus me and mine, plus me and my world...?