I notice the Senate failed to pass the Keystone XL pipeline project. That's all right. They'll do it next year. It's not like we don't have oil pipeline leaks already. We have hundreds, if not thousands, of leaks a year, most of them unreported. Keystone XL will leak all over everything, just like all the others do.
First of all, any engineer will tell you that there is no such thing as leakproof. I've heard tell that, on average, even the best pipeline will have at least one very small leak for every mile of pipe. That wouldn't surprise me. Multiply that by tens of thousands of miles of pipe, and you get the idea. If you live in Nebraska or Kansas, you've already got some really hideous chemicals in your rapidly depleting aquifer anyway, so what's a few more volatile toxins?
And anyway, the way solar is going, even while oil prices stay low, rooftop solar will start to seriously compete with natural gas right about the time Keystone XL is completed. And so what if Canada rapes its landscape?
See, that's the thing I've noticed about the groupthink of group minds: just how fucking stupid it can be.
Are you worried about AI - hyper-intelligent killer robot swarms, vast collective consciousnesses - the way Elon Musk and Stephen Hawking are? Don't be.
We've had these things - these organizations - since at least the time of the Sumerians. True, they make life miserable for a lot of us citizens, and they do seem hellbent on wrecking our habitable living spaces, but they sure don't display much in the way of real intelligence. Even a planned, malevolent intelligence actively trying to commit genocide can't manage to get that right.
See, the thing I've noticed about these collective entities is that they seem to get their tasks done despite themselves. It's not that there aren't neat things happening inside. If you peer into the black box, you see all sorts of clever and wondrous elvish behaviors and tactics and coping mechanisms, but when you look at the black box from the outside, it just takes the inputs and stupidly poops out the outputs according to some least-common-denominator version of cheap-ass processing.
Mine is admittedly an anecdotal experience, but I've worked for five successful multi-billion-dollar corporations, and the thing I noticed was that they all made money despite themselves. It was always weakest-link-in-the-chain kind of stuff (and granted, these were hierarchical collective command structures, but a more distributed network still has the same weak links). I'll go further and suggest that these links are made weak by the very emergent global properties of these collective minds.
You know, I think I started to think about this back in high school, or perhaps before. It may have been exposure to
Carl Jung's notion of the collective unconscious, but no, it was my science fiction reading. Maybe Jack Vance, but for sure Fritz Leiber. I still remember a (paraphrased) line from
Leiber's "The Foxholes of Mars" where the protagonist, battling his alien enemy, has the feeling of the two of them being "epithelial cells scraped off the skin of two warring monsters".
And that, more than anything, gave me the idea that bigger is not necessarily smarter. True, you see it in ant colonies, where the behavior of the superorganism is much more sophisticated than that of an individual ant, and true, humankind as a whole has done things like land probes on comets. But then, you can also get ants to march to their deaths around the rim of a bowl, and humans to (maybe someday) annihilate themselves in nuclear armageddon.
So, it came to my attention that AI was being bandied about over at Edge.org. I haven't been able to read Edge for at least ten years now. Those fuckers can't seem to put forth a substantive discussion without using ten words where one will do. And once you condense what they're saying into a more palatable and digestible form, it's usually old wine in new bottles. I rarely seem to learn anything new from those smart fuckers. Okay, enough snark.
Anyway, that's what prompted me to write about this subject. It doesn't mean that a superior collective intelligence isn't an existential threat; it's just that it's not a new threat.