Let's get right into this, shall we? We're going to be talking about the potential benefits and drawbacks of AI. We all know people are pushing this stuff "for a better future." But will it really be better?
Potential benefits:
- Enhances efficiency
- More creativity
- Could increase the number of jobs
- Could solve problems we can't
Potential drawbacks:
- Discrimination
- Faster hacking
- Automated terrorism
- Propaganda
Now, these are just some pros and cons, and some of this may never happen; this is the far future we're talking about here. My concern is AI getting into the wrong hands. It could easily be programmed to do very bad things.
But we can't deny that AI could also do great things. So is that a risk we should take?
“Computers can, in theory, emulate human intelligence, and exceed it… Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don't know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it.” - Stephen Hawking
Now, I ask you, OffTopic: what are your concerns about AI (if you have any), and do you think it's worth the risk? Or do you think it will be humanity's last mistake?
-
I think human-like AI is best kept almost exclusively in virtual spaces, and I'd love to see that kind of AI someday, specifically in games, where it sounds like a great idea. In a certain game you're given an AI companion who basically has human intelligence and is aware they're an AI; they can also do some special things that you and other non-AIs can't. At one point in the game they simply forget something. Nothing erased a part of their memory storage; they just can't recall it, the way any human sometimes can't remember an event. A character asks how that's even possible, for an AI to forget something, but another AI comments that an AI forgetting something shows just how advanced it really is. I'd like AI to go in that direction.