CFE: AI is not a problem

      Unless you’ve been living under a rock, I’m sure you’ve heard some of the worries around Artificial Intelligence (AI). It seems like I read another article every few days. And a surprising number of people are talking about it in the ‘real world’, people I normally wouldn’t expect to even consider the topic. A few have even asked me if I was worried about it (hence this post).

      AI is not a cause for concern.

      AI is a tool, just as a hammer, car, gun, or pen is. It is different in some ways from most tools, but similar in the ways that matter. Which is to say, it’s the person wielding the tool that can be cause for concern, not the tool itself. Yes, in the hands of someone evil or malicious, AI could cause considerable damage. No doubt about that. But so could any number of existing tools, many of which are arguably already in such hands.

      The ways AI can do damage are a bit different, but we’ll get used to those in due time. And for the most part, they are methods we already have to put up with; people have been doing damage in these same sorts of ways for a good while. The scale may be larger and the speed greater, but overall it’s nothing new. Thus, no cause for concern.

      What is different with AI is that it could potentially improve itself over time, without our input, giving rise to a Skynet scenario. Isn’t that cause for concern? This is where I argue semantics.



What does ‘Artificial’ mean?
      AI is no cause for concern because of its artificial nature. If it reaches the point where it is making its own decisions and determining its own goals, it will have lost that artificial nature. What to call it then, I’m not so sure. Artificial General Intelligence (AGI) appears to be the popular option, but it retains the focus on ‘artificial’, while the distinction (at least in my mind) is that the digital mind has reached beyond that origin. A shame Digital Intelligence (DI) already has a meaning, as I think that would fit well.

      In any case, it’s at this point that things can get complicated, and even scary. It reminds me of a quote from The Bourne Supremacy. Things get scary when tools start making their own decisions.

      In many ways it’s a race. Does AI develop beyond its programming first, or do we teach it to live with us first? Put differently, does it ‘wake up’ and panic because it doesn’t know what to do with all the people running around, who may or may not be trying to turn it off (which could come across as ‘kill’ if you don’t know any better)? Or does it understand peaceful coexistence before it understands ‘self’, and continue along that line as we learn to trust each other?



What do we do?
      It is interesting to hear how some people would like to avoid this potential scenario. The more popular push seems to be for shackling in some form: build in limits that prevent the AI from going ‘rogue’. Safety features are fine (to a point), but if the AI becomes AGI? Then we will have made a slave race. And last time I checked, slavery was considered a ‘bad thing’.

      Alternatively, some people seem more focused on teaching the systems, instilling some sort of moral framework through experience. Admittedly, from what I can tell, this view is more theoretical and in the minority.

      Ironically, someone’s stance on this may say more about them than it does about AI. Faced with a potential threat, would you rather make a slave or welcome a fellow intelligence to this world? ‘Hello World’ is usually a coder’s first step. Perhaps that should be treated as a life lesson, and not just a first lesson in the computer lab.



Solve an algebra equation by chewing bubble gum
      And all of that even assumes such an evolution is possible. If it’s not, all this worry is over nothing.
