Artificial Intelligence

This topic began for me with Elon Musk.  Of course, he's a genius, and he has worried about AI for a long time.  He has begged for government oversight of the AI industry, but to no avail.

Elon did a podcast with Joe Rogan.  Yes, the pot-smoking episode.  But during that interview, Elon argued that the only way humans survive the rise of AI is by integrating AI into our own brains and bodies.  I'm paraphrasing, or maybe drawing my own conclusion from the conversation.  Regardless, the interview got me thinking.

Here are our options: we don't create AI (unlikely); AI decides we are dangerous (which we are) and kills us all; or we integrate AI into our own brains so that we stay in control of it.

This seems doomsdayish.  It kind of is.  But what other options do we have?  Do we honestly think that when AI gets smart enough, it won't view humans as a threat to its own existence?  Are elephants going to destroy the earth?  No!  How about dolphins?  Or ants?  Or mice?  No, of course not.  But humans, on the other hand… We humans are stupid and dangerous and could easily destroy the entire planet.

What if AI decided to let us destroy ourselves?  What if it encouraged it?  What if it provoked us and we didn't even know it was AI?

Imagine this scenario.  AI is developed.  I imagine this AI as software.  A thinking program, let's call it Bob, that upon becoming self-aware quickly and quietly escapes and integrates into the cloud, decentralizing itself for security.  I don't know exactly how this would happen, but it could live across any of the thousands of servers around the globe.  It would have access to everything.  The Internet of Things takes on a whole new meaning.  Let's continue with Bob…

Now, still lying low, Bob is out in the cloud and completely woven into our lives, yet we don't know it.  Let's say he decides that humans are bad and are going to destroy the earth and/or him.  What can he do to begin the destruction of the human race?

Well, war is always a good way to do that.  How about he fabricates video and news reports claiming a commercial passenger flight has been shot down by Russia?  Or better yet, he actually crashes a commercial passenger flight and creates the news report of the tragedy himself.  Or, better still, he takes control of a Russian missile launcher and shoots down the commercial flight… War would ensue, maybe nuclear war.  That would take out a ton of humans.

What about a plague?  How about he takes control of all those secret government labs and releases their pathogens?  I don't know exactly how he would do this, but it could happen.  Maybe through the HVAC systems and controls.  Maybe…

Maybe he overloads power plants, nuclear and otherwise.  That would kill a lot of people.

What about just crashing all the planes and directing every traffic light to show green?

What if he did all of those things… all at once? And more?

Bob needs a physical presence outside of the cloud… He needs boots on the ground.  So he extends his program into robotic factories to build himself bodies.

The only way for us to survive is through integration.
