So, this week I have a little bit of a different topic for you, and I have somewhat less than normal to tell you about it. One of the ‘big bads’ in science fiction for a while now has been Artificial Intelligence. Portrayals range from the machine wars in Frank Herbert’s Dune universe to Skynet in the Terminator movies to HAL in 2001. The evil machines from Stephen Ames Berry’s The AI War and Battle for Terra Two have always been a personal favorite of mine (I just like the universe he creates). When all is said and done, most science fiction writers seem pretty confident that Artificial Intelligence is inevitably a bad thing. However, many philosophers and scientists disagree. Where AI has been the big bad of science fiction, it has been the holy grail of computer engineering for several decades. Even as far back as the ’70s there were people working in the field, both trying to build thinking, learning, self-aware computers and writing about the possible implications of a truly sentient machine, trying to define ‘machine life,’ and wondering if machines could ever really be human.
Now, this isn’t my field, so I can’t give you a lot of details about the various theories. However, I do want you to consider the possibilities of 1) a sentient machine, 2) whether a machine could ever be ‘alive,’ 3) how mankind might treat sentient machines, and 4) how those machines might respond. Could a self-aware computer with the ability to learn and grow reprogram itself to strip out any code that kept it from acting against its creators? Could sentient machines be programmed in a way that limited their ability to self-will and self-direct? If so, would they still be ‘sentient’ or ‘alive’ in any meaningful way? Most importantly… is my toaster someday going to kill me, or be my best friend?
As always, I want you to write a story of 1,000 words that presents and explains your answer to these questions. Have fun!