| gveranon |
07-18-2016 01:29 PM |
Re: Pessimists - What Keeps You Going?
Quote:
Originally Posted by Gray House
(Post 126338)
Quote:
Originally Posted by gveranon
(Post 126320)
Here you seem to me to be on firmer ground again. I think you should be clearer as to whether your ethical judgments inhere in qualities of experience or whether other considerations should be brought to bear. But if you're going to allow for considerations outside of particular individuals' experiences, then I think you should recognize that possible criteria for ethics other than suffering have to be argued for or against rather than simply being dismissed by an utter focus on qualities of conscious experience.
|
Other considerations should be brought to bear, but not independently of the context of a system that affects qualities of experience. Human actions do not happen in isolation from human psychologies and their future impact on the world. Recreationally mutilating a dead body, for example, would be bad because it's an action that requires a harmful type of psychology. And it would be predictive that the person could find it psychologically acceptable to commit acts in the future that cause experiential harms. My ethics are ultimately consequentialist, but I don't limit the category of consequences to being so immediate as to eliminate the importance of people following principles that don't necessarily cause experiential harm in every instance, because having rules that are consistently followed is often consequentially an overall good.
|
Harm-based pessimism, as you depict it, seems so reductive as to be a misdescription of human life. The focus is on the primal level of qualities of experience, but this is not the conceptual level of typical human experience most of the time. Typical human experience isn't even conceptualized as experience most of the time (varieties of acting, goal-seeking, and reflecting are more frequent conceptualizations), and other valuations are often more relevant than whether it feels good or bad. For these reasons, I find approaches to pessimism pursued by Schopenhauer, Cioran, Zapffe, and Ligotti to be more compelling than harm-based pessimism. Those authors are keenly aware of qualities of experience, of course, but they are also concerned, however negatively, with conceptual realms of activity and meaning. If most humans don't usually experience life in terms of harm-based consequentialism, then harm-based consequentialism (a philosophy relentlessly focused on experience) doesn't speak accurately about human experience.
Quote:
Originally Posted by Gray House
(Post 126338)
Quote:
Originally Posted by gveranon
(Post 126320)
I think you should see the electrical brain stimulation/VR scenario as bad, even if, in your words, "the scenario of how the people got into the virtual reality was not experientially negative." I would not choose to be controlled by stimulation/VR myself, but I could conceivably be put into that circumstance while unaware of what was happening or without any negative experience involved (say, while under anesthetic). Your ethic seems utterly immersed in felt qualities of experience again. If you think there should be rules, wouldn't there be a rule against doing something so drastic to someone without his permission, even if he never feels harm at any time in the process? My suspicion that harm-centered ethics could be misused in monstrous ways is confirmed.
|
I agree that consent is important, but only in a context of experiential harms. I'm assuming the entire human race would go into VR at the same time. Also, the human race going into VR would prevent all the non-consensual acts that would otherwise cause real experienced harm, while the "wrong" of the human race going into VR would be a concept which no one would believe, since they wouldn't know they had gone into VR.
|
It's unlikely that the entire human race would go into VR at the same time. New technologies are always adopted gradually. Early, crude VR is already being used here and there. Also, as with other technologies, VR will be used for the usual aims, money and power. If VR can be used to exploit others for these ends, it will be. Harm-based ethics, if it becomes influential, could help to provide cover for this exploitation ("Hey, no one was harmed!"). Your points about indirect harm would be lost; humans are good at ignoring indirect harms, and have to be in order for ordinary life to continue (a reason for pessimism, yes). But you assume that the whole human race will enter VR at the same time. For you, this dispenses with the problem of consent, as long as no harm is experienced. Individuals can be destroyed, as long as no harm is experienced. Again, my suspicion that harm-centered ethics can be misused in monstrous ways is confirmed.
Quote:
Originally Posted by Gray House
(Post 126338)
Quote:
Originally Posted by gveranon
(Post 126299)
My wording there failed to articulate what I was trying to say. I meant to suggest that, with sophisticated electrical stimulation and VR, a person could be led to experience a kind of as-if rationality or ethics, in which thoughts seemed to him to be rational or ethical even though they weren't. A spoofing of rationality or ethics, so to speak. Feeling is not just a matter of sensation; feelings of rightness can attach to thoughts. There is a cognitive psych book titled On Being Certain that talks about "the feeling of knowing." My point was, if all that matters in your ethics is qualities of conscious experience, how can ethical thinking itself escape being seen as just another instance of conscious experience, which could not be judged bad even if it were a manipulated, false semblance of ethical thinking, as long as no one cares or feels any harm?
|
If someone were in VR and it was certain they would never leave VR, there could be a belief in ethics, but the ethics wouldn't really matter. Playing violent video games can only be bad if it negatively affects a person's psychology and manifests as behavior impacting real people or animals. There have to be harms or potential harms for ethics to have a real purpose, and it would be wrong to ensure that there would be risks of real harm just so people could act ethically to stop the harm. That would be creating the problem in order to fix it.
|
Good point about ethics needing to have a purpose, which it presumably wouldn't in VR. No, I don't think it would be good to make problems in order to have problems to fix. But the loss of legitimate ethical thinking in VR would mean that the VR setup itself could not be challenged from within the VR setup. That's beyond Orwellian or Huxleyan or Foucauldian, into a type of total control that doesn't yet have a name, as far as I know. But a name won't be needed if the entire human race is in VR!
Quote:
Originally Posted by Gray House
(Post 126338)
Quote:
Originally Posted by gveranon
(Post 126299)
I understand that distinction. But from your wording it seems that you are judging reasons to act, reasons to discover the truth, as being rational or not rational depending on whether they are in line with your ethical understanding of the situation that gave rise to the reasons to act. In an earlier post, you wrote, "The capacity of sentient beings to have qualitative experiences gives us reason to act. An action is rational if it is in accordance with that reality." (I.e., an action is "rational" if it is in accordance with the way that your ethics assesses that reality.) In this usage, "rational" is a term of approval that assumes your ethical depiction of human situations (as being simply a matter of good or bad experiences), and this seems tendentious rather than descriptive to me.
|
Some people use "rational action" to refer to acting in one's own self-interest. If an action can be called rational with respect to one's own interests, then its impact on the interests of others should also factor into determining whether it's rational.
|
But the "rational" self-interest of an individual and the interests of others are often in conflict. And, at least in most situations, it is impossible to determine the interest of the whole, because individuals within the whole want and need different things. Your ethics tends in the direction of a ruthless collectivism, which, I suppose, is appropriate: Extinction is collective by definition (though nothingness isn't).
|