The Terminator Fallacy


This topic contains 8 replies, has 9 voices, and was last updated by Aferrone 7 months ago.

  • #5803

    jossarsd
    Participant

    In the “Bioelectronics” chapter of the book, Bess argues that the apocalyptic scenario popularized in sci-fi movies such as The Terminator rests on a fallacy. His reasoning makes sense: machines will not be able to take over and exterminate the human race because they will eventually be incorporated into human bodies themselves (and will therefore not exist as separate entities). By this reasoning it would seem illogical to try to halt the progress of bioelectronic enhancements, especially if they arrive as gradually as Bess predicts.
    The problem I see with the “Terminator fallacy” argument is that it does not really address the core issue in The Terminator and similar movies: an event, caused by a human invention, that destroys a large percentage of humanity. While I can accept that machines will (probably) not wipe out all humans out of spite or indifference, that does not mean some catastrophe for large portions of humanity is not on the horizon with bioelectronic enhancements. It seems possible to me that a group of bioelectronically enhanced humans (with access to defense grids, launch codes, the stock market, etc.), rather than machines, would be just as capable of judging mankind unfit (for any number of reasons: seeing humanity as inherently bad, performing calculations that show our current population is unsustainable, general arrogance and hatred of those “below” them, a quest for power, etc.) and of using their enhancements to exterminate those without them.
    In summary, the fact that the technology is within us rather than outside of us does not mean we can rush headlong into bioelectronics without considering a Terminator-esque result. Any bioelectronic enhancement should be carefully weighed, before it is released to humanity, for the power it grants people. After all, humans have spent thousands of years proving that they are quite capable of killing each other on an enormous scale without the help of bioelectronic enhancements.

  • #5815

    marabeard
    Participant

    You make an excellent point. Ben Goertzel, in “Artificial General Intelligence and the Future of Humanity,” parallels Bess’s assertion that AGIs will develop alongside humans, “moving forward together” (Goertzel 136). While both Goertzel and Bess hold that machines will eventually be incorporated into humans and will therefore not exist as separate entities, I, like you, am unconvinced by this argument. Even Goertzel undercuts himself by contending that AGIs will eventually outpace human intelligence and that humans will merely serve as “additional processing units” (Goertzel 132) for these AGIs.

  • #5829

    Michael Bess
    Keymaster

    This is a very good point. My intent with the “Terminator fallacy” argument was mainly to underscore the low probability of machines being entirely separate and distinct from humans. This argument by no means implies that bioelectronically augmented humans of the future might not unleash frightful violence on each other: unfortunately, that possibility is all too real.

  • #5842

    czyzzzz
    Participant

    I think the Terminator factor is something that must be considered. We can make a simple deduction here. Even before any human enhancement, there are already relatively smart and less smart people in terms of IQ: a person with an IQ of 80 will find it hard to understand a person with an IQ of 120. Human enhancement can go in two directions, physical and mental. If enhancements merely make people stronger, a person would have to become extremely strong to disrupt the current balance, so let us set that case aside. If enhancements raise IQ, the prospect is scarier. An average person with an IQ of 100 may still follow what a person with an IQ of 140 is thinking, but what would you make of a person with an IQ of 500? That is very hard to comprehend indeed. Whether it is humans who are enhanced to such a degree or machines, it is hard for us to predict their abilities. If they deem humans bad, or simply think the population is too large, it will be hard for us ordinary people to limit their behavior, because they will be too smart for us.

    Perhaps the real problem is not how to prevent another being from destroying humans. Maybe it is more important to ask what we should do if humanity, or at least some part of it, should not exist. Even without a powerful being trying to kill humans, humans already kill other humans: the death sentence serves as a device to punish people and eliminate those judged bad for society. Why, then, would it be wrong for a super-intelligent being to decide the life or death of everyone? We know that the earth has limited resources and can sustain only so many people. Once there are too many people on earth, eliminating some of them becomes a logical decision, given that we cannot live on another planet. Even if the population has not yet exceeded that limit, it is still possible that the existence of humans is simply bad for everything else. If we are bad, should we accept our fate and be eliminated? Should we fight back? Or should we just hope that the more advanced beings of the future decide we are good creatures to survive alongside them? I think what will happen is that the humans we know are going to be destroyed and new humans are going to dominate, but the population of those new humans is unknown.

  • #5843

    Tony Osorio
    Participant

    Czyzzzz, I cannot tell if what you wrote was a parody of nihilism or just a poorly written defense of murder. While neither I nor anyone else can correctly predict the abilities of future humans, I hope they are capable of better grammar than what you just displayed. But sure, if some people are enhanced to a degree that the mass population cannot compete with, then they will have immense control over the fate of most of humanity.
    However, it is definitely not “more important” to consider whether humans should exist. I think it is impossible to justify the mass execution of humans for any reason. With increased investment and progress in space exploration, and with falling birth rates, it is completely illogical to assume that we will become overpopulated with no hope of colonizing other planets and will have to resort to murdering people. We are not bad, and no one should ever accept a fate in which their life is unjustly stolen from them.
    I agree that one day humans will look back and not recognize our current species as equals, but I can never picture any stage of human evolution renouncing its past.

    Since the original post discussed machines rising to eliminate humans, I would like to return to that topic. I support the idea that the integration of machines into society will be very gradual. Machines will help us with tasks as they do today, then they will become a part of us, and finally we might see machines starting to develop as their own entities in society. I picture (almost comically, almost tenderly) the rise of machines in society being very similar to how a child is raised: at first they won’t even understand their place, but with proper nurturing, machines will grow up and get their own spot at the adult table at Thanksgiving. We might lose control and be destroyed by our inventions, but in the end, if robots and man do become distinctly separate, I hope we like each other.

  • #5939

    Spencer Robbins
    Participant

    I agree with Tony that the integration of machines into society will likely be a gradual process. That said, I would hope that we do not create machines that form an entity entirely separate from humans. Is it not feasible for humans to maintain an “off switch” on the machines we develop? Could an EMP not be used in a situation where machines begin to take over or turn against the human population? I would hope these considerations are taken into account while such machines are developed. I have to believe that if we are capable of inventing these things, we are also capable of shutting them off.
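    To make the “off switch” idea concrete in software terms, here is a minimal dead-man’s-switch sketch in Python: the system keeps running only while a human operator periodically renews authorization. All names and timings are purely illustrative, not anything proposed in the book or this thread, and of course a machine intelligent enough to be dangerous might work around something this simple.

```python
import threading
import time

# Hypothetical dead-man's-switch sketch: the "worker" (standing in for an
# autonomous system) runs only while a human renews authorization. If the
# authorization window lapses, the watchdog trips the off switch.

AUTH_WINDOW_SECONDS = 10            # illustrative: one renewal lasts 10 s
_last_authorized = time.monotonic()
_halted = threading.Event()

def renew_authorization():
    """Called by a human operator to keep the system alive."""
    global _last_authorized
    _last_authorized = time.monotonic()

def watchdog():
    """Trips the off switch if authorization is not renewed in time."""
    while not _halted.is_set():
        if time.monotonic() - _last_authorized > AUTH_WINDOW_SECONDS:
            _halted.set()           # the "off switch" fires
        time.sleep(1)

def worker():
    """Main loop of the autonomous system; honors the off switch."""
    while not _halted.is_set():
        time.sleep(1)               # ...do useful work here...
    print("System halted: authorization lapsed.")

threading.Thread(target=watchdog, daemon=True).start()
worker()                            # halts about 10 s after the last renewal
```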

  • #5947

    amukalel
    Participant

    I agree with Tony that the gradual integration of machines into society is key to preventing any Terminator-like vision of the future. But your analogy of the machine eventually earning its spot at the adult table is predicated on the assumption that the intelligence of these machines will develop much like our own, maturing from child-like to adult. I can’t imagine this will be the case, simply because we would then have to rear the machines, which is time-intensive and impractical. They will probably be at an adult level when they are created, in which case you have the scenario of the adult sitting at the kids’ table, which creates problems in and of itself.

  • #5961

    bobbymallon
    Participant

    While I agree that robots and machines will not destroy humans if they are incorporated into the human body, I believe this incorporation will come only after the point at which robots become dangerous. Robots will likely gain artificial intelligence before they are incorporated into humans, or at around the same time. Even if the incorporation occurs first, standalone artificial intelligence will still be developed, because there are benefits to having robots and computers that are not connected to human beings: the military, for instance, can use robots in dangerous situations where it does not want to risk human lives. This is where the Terminator factor becomes a problem, because such AI would see heavy use within the military, potentially giving it access to the weapons that could be used to destroy humanity.

  • #5963

    Aferrone
    Participant

    I believe that the gradual integration of machine intelligence with humans has already begun and will only progress in the future. For example, we currently use search engines (a form of artificial intelligence) to increase our ability to learn new information, and as technology such as Google Glass makes this integration more seamless, it will only become more pronounced. Furthermore, I believe that institutions such as OpenAI, the non-profit AI research company co-founded by Elon Musk to develop open-source, friendly AI that benefits humanity as a whole, will spread AI research and increase our ability to integrate with the technology. Musk’s wish is to get AI technology into the hands of as many people as possible; by increasing people’s ability to access it, this integration of human and machine intelligence will only be sped up further.
