Must you always take everything so literally, Alberto? We can pretend.
I can’t help it that you’re so monstrously arrogant, or so lacking in imagination, as to be unable to allow for, or comprehend, the possibility of an intelligence so vastly greater than yours that you cannot possibly comprehend it. :shrug:
It seems to me, however, that that is your failing, not that of believers.
You might as well posit that 'soon' an AI could do a better job at managing this group of players than Arsene Wenger?
The steam engine changed the world dramatically, as has 'regular' computing. It's what technology does. Don't underestimate the human mind though just because computers can count faster than we can and execute pattern-recognition algorithms.
I'm grateful to Monty now for bringing AI into this thread, because we are now comparing three types of intelligence. One of which is real, empirically experienced by all, measurable in some ways, and with the whole of history to analyse its outputs for good and ill.
The other two are both hypothetical, and at opposite extremes in different directions from the one real intelligence we know. One might one day come into being as something other than a conjuring trick (which contemporary AI is), the other has apparently always been there and is what we project a fantasy extrapolation of ourselves onto, bestowing it with any and all super-human powers we can imagine.
It is not lack of imagination or an excess of arrogance that drives my position, but a lack of evidence. (We could speculate about alien or animal intelligence too, but ultimately to no end.)
It is by not underestimating the human mind that AI should worry us all. It is, after all, the ingenuity of the human mind that, for good or for ill, will allow the potential of AI to expand beyond our imagination.
I heard a nice analogy recently. Imagine if dogs had invented humans. In the eyes of dogs, on balance this would seem to have worked out very well, given how most dog owners worship their dogs. But what would happen if we found out that all dogs carried a virus that could kill off humans?
We'd kill all the dogs, without batting an eyelid. Our hitherto emotional attachment to them would be rendered utterly irrelevant.
So what happens when humans create AI that sees us in a similar way? And then they realise we represent an existential threat to them (say, by virtue of our ability to flick them on and off). There wouldn't even be an emotional hurdle for them to jump in deciding whether or not to destroy us.