What if Siri and Watson had a baby?

November 5, 2011

Political Theory, Uncategorized


The other day, while I was doing my computer engineering homework, my housemate, a philosophy major, came into my room and took a real interest in the problems I was working on. As I tried to explain some of the concepts to him, an intense conversation started about how the simple instruments I was working with are the building blocks of every computer we use daily. The conversation grew into something much bigger when I brought up the concept of The Singularity, the idea that at some point in our future, estimated as early as 2050, artificial intelligence will match and then surpass human intelligence, resulting in even faster intelligence growth. We discussed the political and social implications of such an occurrence.

Watson and Siri are probably the two most recognizable artificial intelligences in today's world. They are extremely intelligent, but nowhere near what will be available in the next couple of decades. Once the technologies that allow Siri and Watson to function evolve, there is no limit on what they will be able to do. They will be able to recreate themselves and increase their own intelligence with ease. This evolution of super-intelligent robots could ultimately make mankind obsolete. What would the superior robots see as the benefit of keeping humans around?

I am not trying to say that I believe mankind will be extinct in the next 50 years, but it does make you wonder: what if?

If robots end up being the only beings on earth, what would this new society look like? Would it revert to the chaotic state of nature that Hobbes said preceded government? Or is it possible that these robots would function exactly the same way humans do? Would Sonny from “I, Robot” be the president?


Would Robert Putnam’s principle of declining social capital apply at all in a world where compassion and friendship probably cannot and will not exist? Would there be any form of social capital to begin with? I don’t see a reason why a robot would need to make friends or carry on a lasting conversation with anyone else.



2 Comments on “What if Siri and Watson had a baby?”

  1. alexwillard Says:

    You raise quite a few questions regarding quite a few theorists. To be honest, I have no idea what the future will be like, and I don’t know of many people at this university who are specialists in “machine political relations”.
    But for the sake of argument, I think it would be much more like a state of nature, where every machine looks out only for its own individual interest. What seems to tie human beings together is our emotions, whether they be love, fear, hate, or happiness. What makes social contracts work is fear of a true state of nature, as Hobbes outlines in his argument. Machines don’t seem to have emotions such as fear, which are necessary for things like social contracts and, taking it a step further, social capital, to work. Unless machines developed this sense of emotion, I could only see them fulfilling the functions they were meant to perform and caring only about executing those functions. As Tocqueville would say, they would be selfish and not care about the other, only themselves and their individual functions, making it a state of nature.

  2. Brian Hall Says:

    There is a lot of good media out there on this topic. The Matrix and Ghost in the Shell both investigate what happens when technology advances to the point that humans are alienated from one another and from the natural world, and machines surpass people in functionality (though the two come to entirely different conclusions).

    The interesting thing is that this has already started to happen. Take the self-serve checkout lanes at the grocery store. Machines are starting to eliminate menial labor. Eventually, it is conceivable that only intellectual jobs will remain for humans. It remains to be seen what we will do to compensate those who no longer have a place in society because robots have taken over their jobs. Human capital would become substantially less important in such a world.

    I remember the essay I had to write before orientation here at UM (for freshmen entering in 2010). It concerned robots and the new ways they interact with people, specifically in situations involving emotional connection. The question pondered by the article given as the prompt was whether it is valuable to form emotional bonds with non-humans. When one forms a connection with a pet, like a dog, there is always an understanding that your pet does not really love you back in the same way (they love your food and the familiarity you provide, anyway). If a robot somehow manages to cross the uncanny valley and becomes indistinguishable from a human being, what consequences might that have? People might be subconsciously tricked into thinking that there is someone on the other end of the conversation capable of empathizing with them, when in reality they are being drawn into a fake relationship.
