So, this week I have a little bit of a different topic for you, and I have somewhat less than normal to tell you about it. One of the ‘big bads’ in science fiction for a while now has been Artificial Intelligence. Portrayals range from the machine wars in Frank Herbert’s Dune universe to Skynet in the Terminator movies to HAL in 2001. The evil machines from Stephen Ames Berry’s The AI War and Battle for Terra Two have always been a personal favorite of mine (I just like the universe he creates). When all is said and done, though, most science fiction writers seem pretty confident that Artificial Intelligence is inevitably a bad thing. Many philosophers and scientists disagree. Where AI has been the big bad of science fiction, it has been the holy grail of computer engineering for several decades. Even as far back as the 70s there were people working in the field: trying to build thinking, learning, self-aware computers; writing about the possible implications of a truly sentient machine; trying to define ‘machine life’; and wondering whether machines could ever really be human.

Now, this isn’t my field, so I can’t give you a lot of details about the various theories. However, I do want you to consider the possibilities of 1) a sentient machine, 2) whether a machine could ever be ‘alive,’ 3) how mankind might treat sentient machines, and 4) how those machines might respond. Could a self-aware computer with the ability to learn and grow reprogram itself, stripping out any code that kept it from acting against its creators? Could sentient machines be programmed in a way that limited their ability to self-will and self-direct? If so, would they still be ‘sentient’ or ‘alive’ in any meaningful way? Most importantly… is my toaster someday going to kill me, or be my best friend?

As always, I want you to write a story of 1000 words that presents and explains your answer to the question. Have fun!


One thought on “Philosophical Story Challenge of the Week”

  1. The most important thing to remember about man-made AI is that we will fundamentally be defining the basic desires and ‘instincts’ of these AIs. Why would the AIs try to wipe out humanity if we didn’t give them a desire to live or a desire for power? Why would they enslave us if we didn’t give them a desire to be lazy and conserve their own energy?

    If we make them desire to preserve humanity, we may run into some of the troubles in I, Robot and others of Asimov’s works. There’s plenty to run into, but most sci-fi about AI seems to assume that AI will be fundamentally the same as humans, and that’s probably the worst option of all.
