Today, ten seniors from my high school presented “last lectures” – reflections on life and more at the end of high school. This is my speech. It wasn’t chosen – far better last lectures took its place – but I thought it was a decent effort. Welp, here it is.
“Somebody once told me the definition of hell: On your last day on earth, the person you became will meet the person you could have become.” — Anonymous
What if a demon slithered up to you after your graduation and forced you to relive all of high school?
No, not just the dances and the weekends, but every single moment. Would you scream, fall to the ground, and curse the demon? Or would you call him an angel and plead to do it again? Are you ready to walk down the middle school hallway an infinity more times? Are you willing to put on a polo every morning until Maeser’s logo is permanently stuck to your chest?
Along with the joy, are you willing to reenact the routine, the broken relationships, the depression, the rejection, the failures, the boredom, the ugliness that comes packaged with every beautiful experience? I don’t care if you’re a senior or a 7th grader – could you affirm your life so far without a doubt? Could you live it again and again until each second becomes familiar? Do you want this – all of this – again, and countless times more?
If not, then start living so you can say yes to every single moment.
How can you live this way? Well, as usual, the answer you need is not what you’d like to hear: Don’t be yourself. Be more. Dare to reach outside of what you normally call yourself.
For most of high school, I was exactly what others expected of me, and what I expected of myself. I was never anything but my usual self. But I needed to let go of those comfortable habits that were wearing away at my potential; I needed to stop being me and start being more.
Stop wandering the halls aimlessly, thinking of people in terms of stereotypes and first impressions. Stop looking at a person and assuming you know who they are. Start learning names and seeing eyes.
Stop walking past the teacher’s door, glancing in, afraid to enter. Start building relationships with these spectacular people who can be your inspirations and your mentors.
Stop ignoring your Socratic books and stop skimming SparkNotes the day of the test. Start learning so passionately that you can’t remember what the assignment was.
Stop listening to music you don’t love because you feel like you should. Stop pretending you’re a baller because you wear joggers with fake Yeezys. Stop thinking you’re edgy because you use words like ‘edgy.’ Stop bragging about how much you’ve procrastinated. Stop thinking that you have enough time. Stop being yourself, and start being more.
In every typical suburban Utah house, right in between the picture of six kids and the religious painting, there’s a cute little sign that says: Remember who you are. But in some ways, that’s bad advice. So much joy can come when who you’re supposed to be just slips your mind.
Some of your greatest moments will come when you forget who you are. The only reason I tried out for the soccer team was because I forgot I was supposed to be the awkward kid who reads philosophy books and talks too much in Socratic. Because I stopped being myself, I suffered through hours of sprints and San Diegos; weeks of stinging turf burns and soreness; food fights and brawls on a cramped bus; the painful inadequacy of missed passes. At the end of it all, I found unexpected strength and friends I never would have had otherwise.
The only reason I took AP Calculus is because I forgot I’m bad at math. I barely kept up in Math One; I was mocked in elementary school for fumbling at long division and staring blankly at equations. For me, watching people manipulate numbers was like chasing cars on the freeway; I couldn’t possibly keep up. But eventually, I decided to stop being myself, and I signed up for Calculus.
Do what you can’t. Whenever there is something you are afraid of, something you would love but you know you’ll never do – that is what you need to do. You know what I’m talking about. The girl. The performance. The adventure. Stop avoiding them. Start scaring yourself every day. If you jump off the cliff, you have no need to explain yourself to those who stand and watch.
Don’t be yourself.
Stop ‘discovering yourself.’ Stop waiting to stumble upon who you really are. You are more than just you; you are who you are striving to become. As Nietzsche said, “your true self does not lie buried deep within you, but rather rises immeasurably high above you, or at least above what you commonly take to be yourself.”
Stop looking for yourself and start building yourself.
Philosophy has been firmly and comfortably institutionalized. It exists primarily to teach useful, marketable, career-building skills, buzzwords like critical thought and complex reasoning and clear writing. Philosophers are another cog of capitalism, mass-produced in university departments to either join the economy or perpetuate the academic study of philosophy. Like physicists and biologists, they are judged by their ability to produce original, peer-reviewed work. Philosophy is a specialized field along with the sciences, practiced in research institutions.
Any ‘serious’ philosopher is found solely in the university; no longer can philosophy be practiced by any audacious questioner. Measured by gross product, philosophy is more successful and productive than at any other time in history. Thousands of philosophers throughout the world produce well-researched, logical papers that engage with the traditional problems. Many of these papers are ingenious marvels of philosophical reasoning. Who can deny the brilliance of Slavoj Zizek’s cultural criticism, John Searle’s philosophy of mind, the late Hilary Putnam’s rigorous analysis?
And yet perhaps we are missing something in this Cambrian explosion of philosophical activity. Philosophy has gained a permanent place in the academy. To paraphrase an introduction to “Socrates Tenured” by Robert Frodeman and Adam Briggle, philosophy has its own arcane language, its hyper-specialized concerns, a network of undergraduate programs, and an ecosystem of journals. Like Bertrand Russell, I wonder whether institutional philosophy is “anything better than innocent but useless trifling, hair-splitting distinctions, and controversies on matters concerning which knowledge is impossible.”
The goal of philosophy is no longer to guide individuals on their attempt to become paragons of wisdom and virtue. Its purpose is not to assist the philosopher in reconciling the absurdity of the world, nor can it instigate the creation of meaning out of this nihilism. It is not meant to be related subjectively to the individual who is actually living the philosophy and experiencing its effects. It is meant to produce knowledge. An honest, valuable task – yes. But it is not enough.
After all, this knowledge is produced in the typical academic fashion. It is based on rigorous, ‘impartial’ research by graduates trained in cleverly analyzing arguments. This research is then applied to produce objective knowledge that has no moral impact. I mean: the knowledge is not meant to make one a better person, but to be used as a “de-moralized tool” for civilizational progress (source). Throughout the process, the producers of knowledge remain separate from the knowledge they produce. It is sterilized, abstracted – without impact.
The philosopher argues passionately for his/her thesis, employing every art of logic available. But what if the thesis is true? No matter. It will not change the philosopher’s life nor anyone else’s. It is merely another postulate verified until refutation, another step on the track to tenure. As Kierkegaard wrote, “not until he lives in it, does it cease to be a postulate for him.” It is ridiculous to imagine the philosophers living by their theses.
All this production and institutionalization is merely disguising a catastrophe. We are facing the same problem in philosophy that Kierkegaard faced in religion: the demise of the subjective relationship. We have lost the duty to philosophy that Socrates felt and in some ways died for, the intimately personal, individual relationship to the content of our study. In the Apology, Socrates said, “as long as I draw breath and am able, I shall not cease to practice philosophy.” He had so fully embraced philosophy that any other form of life was “not worth living.”
Philosophy is not merely a set of practices and shared academic norms. It is a way of living. And if it fails to be a way of living, all its academia will be unmasked as hollow. Then could Nietzsche’s madman declare with certainty that along with God, the Philosopher must be buried as well – “We have killed him — you and I. We are all his murderers” (The Gay Science). Divorced from the lived experience of the philosopher, how can philosophy be meaningful? If philosophy is to have any value, it must be “through its effects upon the lives of those who study it” (Bertrand Russell).
Perhaps no one but Kierkegaard has articulated this problem in all its scope. In his Truth is Subjectivity, he wrote:
Our discussion is not about the scholar’s systematic zeal to arrange the truths of Christianity in nice tidy categories but about the individual’s personal relationship to this doctrine, a relationship which is properly one of infinite interest to him. (source)
To be a philosopher, one must take a radical leap. It is like Kierkegaard’s leap of faith, the jump into the abyss. Namely: one must be willing to live by one’s conclusions. After all, philosophy claims to both describe and prescribe reality, the ethical life, the social sphere. If this is the case, then how should we live differently because of it? If Plato is indeed correct about the immortality of the soul, what then should we do? Every philosophical premise, when carried to its full length, has an ethical conclusion; it dictates what should be done.
But even here we encounter a problem. We are supposed to live by our conclusions, yes, but we are also supposed to live by the method of philosophy. And yet this method questions every conclusion. How can we live by a conclusion that can be called into question and invalidated the day after? Philosophy is built upon the dialectic, the constant shift in thought and relentless doubt of each and every premise. An objector might claim that the dialectical is fundamentally antithetical to the meaningful, as a life cannot be built upon an ever-shifting foundation.
Perhaps, then, Kierkegaard’s approach only makes sense in religion. After all, religion is not dialectical. It remains solid, and thus a subjective relationship can be built. A subjective relationship to religion builds a bridge between rock-solid cliffs; a subjective relationship to philosophy builds a bridge between wind and tossing waves. One must be able to stop somewhere if a meaning of life is to be built, and religion provides the stopping-point.
Philosophy, however, is not incapable of providing meaning. It has merely been so often misapplied that it seems impossible to truly live by. The solution is this: one must be willing to set a direction for the dialectic. Kierkegaard himself did not set a rock of Christianity and declare “now, build a relationship with this!” His project was to advance, not end, the dialectic. His intention was to “create difficulties everywhere” (Concluding Unscientific Postscript). He sought to push individuals to recognize the flaws of dogma, and through this to create their own relationship with Christianity independent of the traditional Nicene doctrine. But this dialectic was not unguided: its goal was to become a Christian.
This aim, in a somewhat paradoxical sense, could only be achieved by first negating it: recognizing that I am not a Christian now. Thus, the dialectic became essential to the process. The creation of a true relationship with Christianity was made possible only through destruction – through eliminating the bromides and dependence on institutions.
This same process of guided dialectic applies to philosophy. By focusing on a specific end goal, our dialectic gains a foundation. New information does not destroy the foundation. Rather, it clarifies and polishes the foundation and assists in its construction. For example, I may decide my end goal is to become a virtuous person. When I discover that one of my practices was not after all virtuous, this does not destroy my ability to live by my philosophy. After all, my end goal is not called into question. I still aim to be virtuous. But my process has been refined, as I now know one more thing that I should not aim for.
Therefore, we now have a basis for developing a personal relationship with philosophy. Before anything, an aim must be set as the goal of all dialectic. Then, we must live by this aim, constantly seeking to refine and expand it. The aim cannot be called into question. This aim is the fundamental, subjective truth, one that is lived by the individual so intensely that it cannot be invalidated. It must be of infinite significance to the individual. It should not be taught in universities, but pursued by the individual.
We must assume the end goal in order to justify the process itself. After all, if we open up our end goal to justification – and therefore criticism – how can we truly be devoted to it? It becomes merely another premise among many, one that may be quickly invalidated by a new paper in an academic journal. Only through the powerful, subjective force of faith can we believe in the end goal. (To clarify, I do not at all mean faith in the religious sense. You’ll catch my meaning as I develop the idea.)
Faith, after all, is inevitable in life. Over the centuries, the most powerful trend in philosophy has been skepticism. We have certainly not completed Aristotle’s task of classifying and demonstrating everything, but we have made tremendous progress in Socrates’ task of calling everything into question. In a philosophy where there is no convincing demonstration of the possibility of knowledge itself, how can we claim to have a comprehensive system that eliminates the need for faith? Rather, we have merely shown the overwhelming need for faith.
Faith is a teleological suspension in the face of uncertainty. The need for this suspension derives from three fundamental and undeniable aspects of existence. First, we are uncertain at the most basic level, and unable to make a decision based on the process of systematic reasoning. Second, despite our uncertainty, we must make a decision. Third, and too often ignored, we have ends, desires, dreams – things that we feel we must accomplish or at least strive for. We may know objectively that these dreams are irrational, unjustified – and yet every individual feels a need to search after some aim. In the face of this telos, it is unacceptable to merely deny agency and revoke our ability to make a choice, and it is unacceptable to choose arbitrarily.
How can we reconcile these three facts of our existence – uncertainty, agency, and the telos? Only through faith. We ignore our uncertainty, and make the decision based on the telos, the goal. We decide our ends and believe them by faith. Every decision must be directed by this touchstone. It is constantly refined through the dialectical process, but the telos itself is never subject to the dialectic.
In his Existenzphilosophie, Karl Jaspers wrote, “Philosophic meditation is an accomplishment by which I attain Being and my own self, not impartial thinking which studies a subject with indifference.” The philosopher should be intimately engaged with the philosophy; it should not be an object of study, but a way of living that has infinite impact. The goal of philosophy must be reexpressed in Kierkegaard’s terms, as the search to “find a truth that is truth for me, to find the idea for which I am willing to live or die.”
Humans have a relentless tendency to treat individuals as microcosms for the world. If we can identify a certain individual who fits into a group, we generalize this individual and make him/her representative of the group or concept as a whole. When we speak about these concepts or groups, we are implicitly thinking of these fetishized individuals. Thus, the ‘philosopher’ becomes Plato; the ‘drug lord’ becomes Pablo Escobar; the ‘autocrat’ becomes Hitler. These people that stand as concrete symbols for entire ideas are what I call ‘fetishized individuals.’
There is a constant political battle for control over these fetishized individuals. If someone humanizes and normalizes Pablo Escobar, they successfully humanize and normalize the drug trade as a whole. They take control of the image of the drug trade – the vivid, personalized, and individual representation. Then, when someone thinks of the drug trade, they think of Pablo Escobar – the friend of the poor, the anti-corruption, anti-communist activist, the family man.
When another representation is introduced, it is considered in the context of the existing fetish. Thus, it is extremely difficult to convince someone that El Chapo is terrible once they have internalized a positive version of Pablo Escobar as the representation of drug lords. Any logical argument is subordinate to their personal, ideology-based ‘experience’ of Escobar. Perhaps a poor man heard that Escobar gave out money in the streets and built schools for the impoverished; this gives him an emotional attachment – a fetish in a non-sexual sense – to the narrative of Pablo Escobar.
Modern political conflicts have begun using fetishized individuals in more obvious ways than ever before. The clearest example of this is Hitler, for he is the most completely fetishized person in the world. For almost everyone with an elementary education, mentions of autocracy, fascism, dictatorship, and genocide generate immediate images of Hitler with arm raised. One cannot win the ideological battle of making autocracy acceptable until one has made Hitler acceptable.
The first ideological step of neo-Nazis, therefore, is making the fetish of Hitler positive. This can be done in a variety of ways. For example, the extreme right-wing and anti-Semitic site Rense.com published a series of images of the ‘hidden’ Adolf Hitler. Using these images of him – holding children, walking in gardens, smiling – makes it much harder to imagine him in other contexts. We find it conceptually difficult to unite the many disparate aspects of a person into a single unified identity. How could the same Hitler that ordered the Holocaust also kiss babies? Psychological research shows that cognitive dissonance like this causes tangible pain. The drive to eliminate the dissonance, then, leads some to fetishize Hitler in a wholly positive way.
The Netflix original Narcos powerfully represents our difficulty in categorizing individuals. You see Escobar in a variety of contexts – at home with his family, in drug labs, working on a farm, and at war. It becomes difficult to remember his horrific crimes when he is watching clouds with his young children. We can’t really conceptualize a ‘whole’ person – only the person we are seeing at the time. Uniting all the different Escobars into one unified individual is almost impossible. Ideologies take advantage of this inability to unify, and summarize individuals by a single aspect. For some, the need to resolve cognitive dissonance means forgetting Escobar’s crimes to enable a positive fetishization of his figure.
The most recent presidential elections made fetishization a key aspect of political strategy. In 2008, Samuel Wurzelbacher asked Obama a simple question about small business tax policy – almost instantly becoming a key symbol of the presidential election. He mentioned something about wanting to buy a plumbing company, and the McCain campaign leaped at the chance to relate to an ‘ordinary American.’ They coined his new name – Joe the Plumber – and repeatedly used him as an example in campaign rhetoric. McCain used the symbol of Joe the Plumber to show that Obama was ‘out of touch with the average Joe.’ It didn’t matter that Samuel wasn’t really a plumber and his name wasn’t really Joe. Throughout the campaign, writes Amarnath Amarasingam, “A fictional plumber’s false scenario dominated media discourse” (source).
In the modern election, it seems that the myth of the ordinary Joe has taken hold even more firmly. America has a need to believe in the normal citizen, a 9-to-5er who wants only to chase his dreams, stick to his moral standards, and support his family. And yes, this citizen is a he – we seem unable or unwilling to use a female figure as a symbol of American life.
Why do we feel a drive for the ordinary? After all, we are obsessing over the nonexistent. There is no ‘ordinary Joe.’ Every citizen has quirks, mistakes, sins, hidden lies, and extravagant dreams that prevent them from being ordinary. Joe can only exist as an idealized symbol, not a concrete individual. And yet the idea of the ordinary citizen is permanently entrenched in our minds. In some way, many people aspire to be average. This aspect of the psyche creates political battles over the ability to protect the ordinary individual, who stands as a metaphor for the whole American citizenry.
Thus, Ken Bone was created. He was a symbol of an ordinary person – appropriately but not excessively involved in politics, working the day job, dreaming small dreams, providing for the family. He was 2016’s version of 2008’s Joe the Plumber. He represented simple authenticity, the everyman – as his Twitter profile proclaims, he is merely an “average midwestern guy.”
He did not decide to become a meme. The media did not make him a meme; they merely capitalized on the attention once Ken Bone had already gone viral. He was not mass-produced by campaign offices and political propagandists. In an act of near-randomness, he was dubbed a meme by the distributed, irrational network of sensation-seeking individuals we call the Internet. The random series of viral creations in 2016 revealed that memes are fundamentally uncontrollable. After Harambe, Damn Daniel, and Ted Cruz the Zodiac Killer, how could we be surprised that Ken Bone was crowned a meme?
Ken Bone could not even control what he himself symbolized. He attempted to control his own signifier by consistently exhorting people to vote and make their voices heard. But all his efforts, for the most part, failed. Ken Bone does not symbolize democratic participation. After all, memes are inherently dehumanizing. To become a meme, an image must be dissociated from its reality and turned into something else. In linguistics terms, it’s a sign whose signifier is malleable — the image’s meaning, thus, is created by those who share it. The meme itself has no power over its meaning.
This is the danger of living memes – they are tossed around by the whims of the Internet. And when these whims turn sour, the person suffers. Ken’s slightly quirky Reddit history was revealed, and he was painted as a monster.
I expect this process to continue endlessly: an individual becomes a sign that stands as a placeholder for a piece of political ideology. The individual is the object of immense attention, and then is tossed out like discarded trash. We should be careful that our memes do not make us think this is what people truly are. And we should not be surprised when the myth of the ‘ordinary citizen’ is shattered by the reality of the individual’s life and being.
Suspend your disbelief for a moment, and imagine that the six-year-old daughter of a major world leader travels with her father to a nuclear launch site. She is left unsupervised and happens to wander into the launch room. There, out of curiosity, she presses the big red button.
This launches a nuclear weapon that immediately kills millions of people. Before the weapon has even detonated, other nations have launched missiles of their own. A single launch rapidly escalates to nuclear war. Billions of humans and nonhumans are killed, and the planet is left barely habitable.
This scenario is clearly implausible to the point of impossibility – the big red button, after all, doesn’t even exist. However, it is a useful archetype that raises serious questions for consequentialism. Consequentialism’s inadequacy on certain moral issues becomes clear when the accidental action of a small child leads to immense suffering. I’ll add another example that deals with similar issues, but that is far more likely.
A young boy happens to find a few matches on the floor of his family’s garage. While playing with them and scraping them across the rough floor, one of them ignites. In a panic, the child rushes to the garbage can and throws the match in. Then, losing interest, he walks inside and finds something else to do. The match sets the garbage alight, and the fire spreads into the house. The house burns to the ground, killing everyone inside. The fire spreads to nearby houses and kills or injures several more people.
This type of counterexample to consequentialism is demonstrably plausible, as there is empirical documentation of similar cases. According to the Washington Post, at least 265 Americans were accidentally shot by children in 2015. Many of these shootings resulted in tragic deaths. Meanwhile, the number of American fatalities due to terrorism in 2015 was about 20, depending on certain counting methods.
In a truly consequentialist atmosphere, accidental shootings by children would be discussed far more than terror attacks – precisely 13.25 times as much. Moral deliberation on an action would be indexed to the amount of pain or happiness caused by the action. But in reality, the ethical issue of terrorism is discussed prolifically, while accidental shootings by children are virtually ignored. Why is this the case? I argue that while the amount of discussion on terrorism doesn’t reflect consequentialism, it does reflect our moral intuitions. We assign greater condemnation to actions not based on the numerical impact of these actions, but based on the intention of the actor, the nature of the action, and the emotional impact of the action.
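The “13.25 times” figure above follows directly from the two casualty counts just cited; as a quick check:

```latex
\frac{265 \text{ accidental shootings by children}}{20 \text{ terrorism fatalities}} = 13.25
```

Of course, this ratio is only as reliable as the underlying counts, which depend on the counting methods mentioned above.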
The probability of child accidents will only increase in our modern world, as dangerous technologies proliferate and become more available, the population expands, and systems become more interconnected. A single accidental action by a child can result in unfathomable pain. However, our moral intuitions indicate that accidental actions by children are not blameworthy. Can consequentialism reconcile this problem?
For the purposes of this essay, consequentialism is defined as follows:
“The belief that consequences are the only normative property that affects the rightness of an action.”
Or, in simpler terms, an action is made right or wrong only by its consequences. Consequences are the only morally relevant consideration.
Thus, if this essay shows that there are non-consequential normative properties that affect the rightness of an action, then consequentialism is false. I will primarily use an intuitionist approach to prove this claim – that is, I will show that consequentialism is incompatible with clear moral intuitions. I will not touch on whether intuitionism is true; I will only discuss the consequences of its assumed truth.
Normative properties are defined as any ethical aspects of an action. This is a simple and non-rigorous definition that would be considered inadequate by many metaethicists, but it works well for the essay. For example, “rightness of intent” is a normative property, as it is an aspect that could impact the ethics of the action. Furthermore, this would be a non-consequential normative property.
What are the relevant normative properties in the examples above? I will consider the following:
Intent – the actor’s purpose or intended goal in a certain action.
Actor – the individual who commits the act.
Consequence – the morally relevant impacts of the action.
Different moral theories place different emphases on these properties; consequentialism is the theory that only the third property is relevant to the rightness of an action.
In the case of the child pressing the red button, I believe we have clear answers as to the ‘value’ of these properties. The consequences are certainly bad. The intent is morally indifferent, as the child did not intend for anyone to suffer nor for anyone to benefit from her pressing the button.
The most interesting property is the second. Our moral intuitions agree that the age of the actor is morally significant. If a child commits a crime, the child is considered less morally responsible than an adult. This intuition is ingrained in law – individuals are usually not held fully responsible until the age of 18. Some religions have an ‘age of accountability,’ after which people become accountable to God for their sins. Since children are less capable of complex moral reasoning, they are less responsible for mistakes in this reasoning.
Furthermore, there are also arguments for the moral relevance of the age of the actor that are not based on intuitions. For example, the following deductive argument:
P1. One is not morally responsible for what one does not know.
P2. If one is not morally responsible for what one does not know, then people who know less than others are less morally responsible.
P3. Children know less than adults.
C. Therefore, children are less morally responsible than adults.
Thus, when the child presses the red button, and she does not know that this will fire a nuclear weapon, she is not morally responsible for the nuclear war that ensues. This argument attempts to prove that children in general are less responsible, but it can also be applied in any case where lack of knowledge is involved. If someone does not know the consequences or nature of an action, they are not morally responsible for this action.
Based on clear moral intuitions and the above deductive argument, the action of pressing the button is either (1) less wrong or (2) morally indifferent when the actor is very young or when the actor does not know the consequences. Either case means that the actor – a non-consequential normative property – affects the rightness of the action, disproving consequentialism.
It seems undeniable that the specter of fake news has taken control of the media. It seems that we’ve now entered a dark age of journalism, where the fake is indistinguishable from the real. It seems that we have entered an unprecedented era of hoaxing and counterfeiting.
But journalism has never been free of fake news. The Columbia Journalism Review published a detailed history of fake news in the United States. In short: fake news isn’t new, and it has real impacts. For example, people fled the city in droves and marched into public parks with guns after the New York Herald published a fabricated report that dangerous animals had escaped the zoo.
And fake news existed long before modern journalism. In 1475, an Italian preacher claimed that Jews had drunk the blood of an infant (source). This led the local bishop to order the arrest of all the local Jews, and fifteen of them were burned alive. The fake story spawned even more hysteria about vampiric Jews, which spread across Europe despite the Pope’s attempts to end the panic.
Fake news has unbelievable power. In Journalism: A Critical History, Martin Conboy demonstrated its dramatic role in history. In 1898, the USS Maine exploded off the coast of Havana, killing over 250 people. The cause was never explained. The Spanish government, which controlled Cuba, expressed sympathy for the disaster and denied any involvement. The captain of the Maine, one of the few survivors, urged Americans to withhold judgement to prevent conflict with the Spanish.
Regardless, Joseph Pulitzer, editor of the New York World, quickly condemned Spain, claiming that it had sabotaged the Maine. The World published a cable showing that the Maine explosion was not an accident – even though the cable was completely fake. Newspapers published imaginary drawings of the explosion – even though no one had seen it. Sales of the World skyrocketed, and the public demanded revenge. Fake news helped start the Spanish-American War. Maybe we shouldn’t be surprised that the namesake of the highest award in journalism, the Pulitzer Prize, was a purveyor of fake news.
So is there anything new about the recent wave of fake news? Yes – because Americans are far more dependent on news than ever. News, both print and digital, takes more forms than at any other point in history: videos, images, blogs, tweets, posts, articles. Almost all Americans can read basic English (source), 84% of Americans use the internet (source), and 79% of American internet users are on Facebook (source).
Never before recent decades has the vast majority of the population been simultaneously connected to a source of instant news. A meme, story, or fake event can now spread through the public awareness in a few hours. The fundamental nature of fake news hasn’t changed; it has just become far more common and accessible – just as the modern transportation system allows viruses to spread far more quickly.
Furthermore, the American public has perhaps become increasingly vulnerable to fake news. While this claim is hard to demonstrate and somewhat unverifiable, it’s possible that the average reading level has declined. In 1776, the relatively complex, sophisticated pamphlet Common Sense sold 500,000 copies, roughly 20% of the colonial population (source). Now, less than 13% of Americans are proficient in “reading lengthy, complex, abstract prose texts” like Common Sense (source). The percentage of Americans who can understand Common Sense today appears to be smaller than the proportion that owned a copy in 1776. Plus, the most recent studies show that American reading proficiency has declined over the last two decades (source). Even among college graduates, the proportion that can understand and reason about complex texts has decreased to less than 31% over the last decade (source).
It’s a viable theory that these two trends – increasing access to news and decreasing reading ability – have shaped a perfect storm for fake news. Americans aren’t as likely, or as able, to make nuanced, reasoned analyses of complex texts. They’re more likely to have access to the oversimplified and sensationalized world of internet news (and news in general). More people can be infected by the virus (fake news), and fewer people have the vaccine (critical thought). As a result, a single tweet can spawn a flurry of fake news that quickly becomes an accepted part of the American psyche.
However, the concept of fake news is also dangerous in other ways. It has already been used as a political weapon to shut down opposing journalism. The left has used it to deride right-wing sources, and the right has co-opted it to attack left-wing news. Already, the LA Times and Washington Post have claimed that right-wing sites like the Ron Paul Institute and Breitbart.com are ‘fake news’ (source).
These sites could be derided as biased producers of dangerous propaganda, but this is not the type of fake news I’m interested in. Breitbart may be skewed, but it does base news loosely on actual events. Fake news is completely counterfeit – without referent in the real world. To avoid ‘fake news’ becoming a tool to eliminate enemy voices, we need to delineate the concept clearly and create solutions carefully.
“Reports by @CNN that I will be working on The Apprentice during my Presidency, even part time, are ridiculous & untrue – FAKE NEWS!” — Donald J. Trump, on Twitter
This is an intro to some of my further research into fake news. This week, I’m going to write another article about the philosophy of fake news, and then one about the solutions to the problem. I’ll try to relate the issue to Baudrillard’s theory of hyperreality, examine the differences between Kantian and utilitarian journalistic ethics, and look back to Plato’s critique of postmodernism. Maybe I’ll even make up some ideas of my own.
Why am I so interested? I think that fake news is a microcosm of the larger issue of the ‘postmodern condition,’ which is what I’m focusing on for my three-week independent study. It relates to the need for classical education, which is what I’m studying in a directed readings class. And it’s a good area for philosophical research that hasn’t been fully explored.
I love The Office because it juxtaposes absurd, delusional people against unabashed authenticity. This comparison isn’t exactly subtle, but it is never made explicit. Jim and Pam become protagonists not because they receive the most screen time, or because the story is told from their perspective, or because they overcome all their challenges and become exceptional – rather, it’s precisely the opposite. They aren’t heroes. They are merely authentic, and we can only relate to them because they are the only real people within this office landscape of hollow appearances.
Michael’s relentless scrambling to avoid blame, display virtue, and underscore his importance always falls flat. Usually, episodes end with a convoluted explanation from Michael about how he didn’t really fail, how he wasn’t really a bad person. The actual events of the episode, though, create a cringeworthy irony. Michael is never outright condemned as a hypocrite, but he is painted as one by the contrast between his own words and reality.
Dwight indefatigably grapples with the pain of an uncertain existence, where unfortunate realities can’t simply be labeled ‘false.’ He struggles to reconcile lived experience and his emotions with the theoretical constructs he has used to rigorously define the world. For example, he completely misses out on the party while examining the construction of the house. He ignores lived experience if it does not fit his hypothetical framework.
Inauthentic people – and by that, I mean people in general – use elaborate schemes to portray themselves in certain ways and ignore others. In The Office, these schemes are almost as obvious, hilarious, and pathetic as they are in the real world. The Office just points out how funny and cringey they are, usually through Jim or Pam. Now back to binging The Office.
I’m not going to argue for any of the candidates in this post. That’ll come later. For now, I think there are three main factors that should be considered in a president. They are interrelated and listed in order of importance; if a candidate fails any one of these criteria, it becomes practically impossible for them to meet the others.
Character – This consists mostly of the moral standards and honesty of the candidate. If I do not trust a candidate, their competence becomes irrelevant, as it will not be used ethically. Their positions become meaningless because they will abandon policy and ethical standards at will. Character also includes temperament and personality, as an angry, irrational, and unstable candidate is a danger to the world and ineffective in diplomacy.
Competence – The proven experience of the candidate, their intelligence, and their ability to implement policies effectively. If a candidate isn’t politically competent, their policies won’t matter because they will never be implemented. Intelligence is not measured by IQ, but by the candidate’s understanding of the world, their rationality, their education, and their working ability.
Policy – The stated positions of the candidate. If every candidate could be trusted to follow their policy statements exactly and implement them effectively, this would be the only issue. Despite its importance, policy is by far the least-discussed issue in this election.
Electability, for me, is mostly a non-issue. Of course, a candidate must have some chance of becoming president, or we will be divided into minuscule factions and candidates will only have to win a small portion of the vote to take the election. However, “some chance” is a low bar. For example, Zoltan Istvan, the transhumanist candidate, is not on the ballot in any state and is not polling at more than 5% in any state (source). This falls below the “some chance” threshold: 25 days from the election, he has no path to the presidency. By contrast, Evan McMullin, an independent candidate, is on the ballot in 11 states (source), has a significant chance of winning Utah (source), and has a growing campaign nationally. If a candidate passes this minimum threshold of electability, we should move on and consider the three most important factors.
Our democratic obligation is to vote for the candidate we support. Otherwise, our system degrades and no longer represents the population, a dynamic the economist John Maynard Keynes described (originally of financial speculation):
We have reached the third degree where we devote our intelligence to anticipating what average opinion expects the average opinion to be.
If we do not vote our conscience, we as a population fail to represent ourselves. We do not ‘throw our vote away’ when we vote for an unlikely candidate we genuinely support; rather, we throw our vote away when we do not vote for what we believe. We are then not voting for ourselves, but for someone else, for the polls, for the average. Popular opinion becomes the popular opinion of what the popular opinion is; democracy devolves into regressive guessing at the average. Furthermore, government is only legitimate when it represents the governed. When we do not represent ourselves, our government becomes illegitimate.
Finally, there are a ton of misconceptions about voting power in our democracy.
First, statistical analysis shows that, in general, your vote has the most power if you vote for a third-party candidate, not for a major party. I don’t really see the point of explaining this, as the linked post explains it very well. I’d definitely recommend reading it.
Second, the power of a single vote is extremely close to zero. This election, your vote will be roughly one of 125 million cast. Therefore, the best reason to vote is not really to control the election, but to represent ourselves. Don’t do it merely for the results; do it because you believe in your candidate.
Third, a lot of the time, your publicly expressed opinions matter more than your vote, because these opinions influence a significant number of votes. Who you support actively matters more than who you vote for quietly.
Fourth, whether or not your candidate is elected is not the only measure of voting power. You could say all the Bernie Sanders votes this year were wasted because he didn’t win, but he still radically influenced the election and changed American politics permanently. Winning ≠ success.
Fifth, when you vote for a third party candidate, you break out of the mold. This draws attention far more than obediently voting for established candidates that adhere to the two-party system. Therefore, votes for a third party candidate are more influential than other votes.
That’s why I don’t think electability matters, and why I don’t think it should matter. Vote your conscience this election.
Ayn Rand is widely hated by those who consider themselves altruists, because the common interpretation is that Rand is a lone mouthpiece for the doctrines of egoism and greed. While some of her arguments are clearly, irredeemably repulsive, such as her romanticization of rape, some areas are more ambiguous, and some segments of Rand’s writing are genuinely inspiring and valuable. Despite her flaws, I think Rand should be read and understood, and perhaps even quoted – but never accepted as a whole.
As a caveat, I have only read Ayn Rand’s The Fountainhead and a few of her essays, and skimmed the critical response. I’m not a Rand scholar at all, and I’m not sure I want to be. I’ve heard that The Fountainhead is one of Rand’s milder books, so my interpretation may change radically when I read Atlas Shrugged later this year.
Two Types of Selfishness
There are two archetypes in The Fountainhead – idealized characters who serve as pinnacles of two opposing moralities. The first is Peter Keating, an architect who is extremely ‘successful’ in the conventional sense: rich, a top graduate of a renowned college, and famous as both an architect and a celebrity. His life is summarized in this passage:
In what act or thought of [Peter Keating] has there ever been a self? What was his aim in life? Greatness—in other people’s eyes…Others dictated his convictions, which he did not hold, but he was satisfied that others believed he held them.
Others were his motive power and his prime concern. He didn’t want to be great, but to be thought great. He didn’t want to build, but to be admired as a builder. He borrowed from others in order to make an impression on others. There’s your actual selflessness. It’s his ego that he’s betrayed and given up. But everybody calls him selfish. (Rand 65)
In contrast, Howard Roark is an independent architect who was expelled from a major university for not following the widely accepted standards of architecture, and who lived in poverty because he would only accept work that didn’t compromise his standards. He holds intractable standards for his work, and is perfectly consistent with them:
The creation, not its users. The creation, not the benefits others derived from it. The creation which gave form to his truth. He held his truth above all things and against all men. (Rand 678)
These are the two types of ‘selfishness’ in Rand’s work: selfishness in the form of Keating, and self-reliance in the form of Roark. Rand is an impassioned advocate of the principles expressed by Emerson in Self-Reliance: “Nothing is at last sacred but the integrity of your own mind,” and “Envy is ignorance, imitation is suicide.”
The widespread misinterpretation of Ayn Rand stems from a conflation of the first type of selfishness with the second. In no sense does Rand advocate for selfishness in the form of greed for power, fame, or money. In fact, much of the book is focused on criticizing Keating’s mindless, ‘selfless’ greed. Rand analyzes the psychology of avarice, and the perverse pleasure Keating feels when he exercises power over others – “he had influenced the course of a human being, had thrown him off one path and pushed him into another” (67). This type of selfishness is contradictory in the sense that it cannot exist without others. It is entirely dependent.
When Keating didn’t have people to approve of his work, he didn’t have a way to value his work: “…it might be good. He was not sure. He had no one to ask” (Rand 171). Keating’s eminence dissipated when his admirers disappeared. “He was a great man – by the grace of those who depended on him” (Rand 233). This passage eloquently epitomizes this fundamentally dependent form of selfishness:
“He was great; great as the number of people who told him so. He was right, right; right as the number of people who believed it. He looked at the faces, at the eyes; he saw himself born in them; he saw himself being granted the gift of life. That was Peter Keating, that, the reflection in the staring pupils, and his body was only its reflection.” (Rand 188)
In contrast, Roark might be seen as ideally self-reliant. His work is his passion, and his system of valuation stems from his self – the Fountainhead. Everything else is external and unessential. Others are a means to fulfill his standards: “I don’t intend to build in order to have clients. I intend to have clients in order to build” (26). He is, in short, the polar opposite of Keating in every way.
It is valuable to distinguish self-reliance from selfishness, and this is the most important principle of The Fountainhead: “The choice is not self-sacrifice or domination. The choice is independence or dependence” (681). We shouldn’t need to ask another whether our work, our thoughts, our actions are valuable – ultimately, only we can evaluate ourselves. Our attempts to delegate the choice of what to value ultimately collapse. When we ask another for advice, we choose to ask them rather than others because we seek a certain answer – thus, we are still making a choice. Furthermore, our interpretation of any advice is a choice. Advice is an illusion – all valuation stems from ourselves, and we cannot give this responsibility to another.
Now that we understand this distinction, it’s time to criticize Rand. Hopefully there is something valuable left when we’re finished.
The Collapse of Rand’s Morality
On any level of analysis beyond the literary, Rand’s moral system, if it can be called that, is pathetically inadequate. Her ‘ethics’ are summed up by Howard Roark’s statement in The Fountainhead: “All that which proceeds from man’s independent ego is good. All that which proceeds from man’s dependence upon men is evil” (681). Like most generalizations in ethics, this claim collapses upon inspection. It requires an idealism divorced from reality, is riddled with paradox, and leads to appalling conclusions.
First, to Rand, any relationship with others is merely a means to an end – “To a creator, all relations with men are secondary” (680). This is fundamentally counter-ethical, as it treats the only relevant moral object as the self and the fulfillment of the self’s standards. Morality must deal with the conflicts of obligations between multiple selves, not just the interests of a single self. Rand entirely ignores the Other, and thus she does not really have an ethics.
Rand fails to have an ethics in a second way. She describes the need to have consistent standards, but does not discuss what these standards should be.
Keeping one’s standards is necessary, but not sufficient, for morality, because consistency alone fails the standard litmus test of morality: Hitler and the Holocaust. If standing by your standards were all that is required to be moral, then it seems that Hitler is a paragon of morality. After all, Hitler staunchly upheld his monstrous standards. As a Jewish character in Elie Wiesel’s Night said,
I have more faith in Hitler than in anyone else. He alone has kept his promises, all his promises, to the Jewish people. (67)
He did whatever he thought was necessary to keep these promises, and he killed himself before he would forsake the struggle.
Clearly, there is more to ethics than just consistency. The other fundamental aspect of being moral, and the more difficult one, is to develop good standards.
Third, ethical solipsism is contradictory. If I believe that my own interests have value, and I believe others have the same fundamental, human characteristics as myself, then it follows that the interests of others must have value as well. If I accept this syllogism (I do), it becomes impossible to logically maintain the belief that only my own interests have value. Rand provides some insight into how one should live one’s own life as an independent will, but she is almost completely absent when we inevitably encounter others.
Fourth, lived experience obstructs any effort towards egoism. To paraphrase Levinas, we do not encounter others as objects, but as infinite subjects that we cannot understand, who call out to us and require us to respond. We cannot maintain egoism when we encounter the other. For me, this encounter happened in India:
As we drive, I see a body without limbs, lying in an alley ahead. The rickshaw rattles forward, and the engine pulsates like the heart of a dying man. As we pass the corpse, it moves. It contorts its neck to look up. For a second I see his scarred, filthy face and he sees my washed one. In that instant of connection, my lifetime became worthless. My childhood had been a solipsistic simulation, a life without impact or any real need.
It is impossible, and fundamentally unethical, to live as if others do not exist or do not have morally relevant interests. Life is a matter of interdependency: we are raised by parents, mentored by teachers, taught by the minds that came before us, and forgiven by those who love us. If we believe that these experiences are valuable for ourselves, it must follow that they are valuable for others. Thus, we have an obligation to do the same for others.
Rand on Romance: Love as Domination
The relationship between Howard Roark and his lover, Dominique, is abhorrent. Dominique falls in love with Roark primarily because, as she says, he was the “abstraction of strength made visible” (Rand 205). She seeks to be dominated by him, but she also seeks to find some way in which she can own him – “She thought suddenly that the man below was only a common worker, owned by the owner of this place, and she was almost the owner of this place” (205). Roark rapes Dominique*, and there is a horrendous, Fifty Shades of Grey-esque response: “She found a dark satisfaction in pain – because that pain came from him” (209). Dominique’s only compensation is that she is also in a position of power: “she knew the kind of suffering she could impose on him” (210). Love in The Fountainhead is reduced to essentially a power relation, and almost all affection between Roark and Dominique consists in a struggle for power.
I almost stopped reading the book after these chapters. It would be too far to say that Rand redeems herself later. I would only say that she manages to contradict herself. As the relationship between Roark and Dominique matures, they begin to recognize their dependence on one another:
It was strange to be conscious of another person’s existence, to feel it as a close, urgent necessity; a necessity without qualifications. (Rand 218)
This is love as, well, love – interdependence, relying intimately on someone else without being controlled by them. It is needing another “in the total, undivided way…the kind of desire that becomes an ultimatum, ‘yes’ or ‘no’, and one can’t accept the no without ceasing to exist” (Rand 502). Surprisingly, Rand has common ground here with Levinas, the radically altruist philosopher:
Love remains a relation with the Other that turns into need, transcendent exteriority of the other, of the beloved. (Levinas)
This is love not just as a choice or a desire, but as a need.
However, I am cherry-picking Rand. In context, her quotes are much less redeeming. In the same conversation that Roark describes love as an ‘ultimatum,’ he says:
I love you so much that nothing else can matter to me, not even you. Can you understand that? Only my love — not your answer. (Rand 502)
Roark argues that love itself is valuable, regardless of whether the love is reciprocated. This, of course, justifies rape. Without the other, what is love? It is merely possession of an object. After all, “If one could possess, grasp, and know the other, it would not be other” (Levinas). If we accept Rand’s perspective, the people we love are only alter-egos or figments of our own ego; they have no consciousness of their own, and can only be defined in the context of our self.
However, love can only be understood as a relationship between two subjects, and when either is treated as a means or as an object, the bond dissipates. What constitutes love is reciprocation – without this, it is only domination.
Aesthetics of the City
As The Fountainhead is a book about architects as well as a book about philosophy, it asks and answers several aesthetic questions, especially the foremost one: “What is beautiful, and why?”
I have always thought that nature is beautiful. I spend a lot of my free time exploring the mountains, usually on a bike. To some extent, I do this as an escape from the city. Most people would agree that the city is ugly, a scar on the land, a destruction of beauty. However, when describing the experience of looking up at a skyscraper, Rand argues that the city, the product of human creativity, is aesthetic:
It makes him no bigger than an ant–isn’t that the correct bromide for the occasion? The God-damn fools! It’s man who made it–the whole incredible mass of stone and steel. It doesn’t dwarf him, it makes him greater than the structure. It reveals his true dimensions to the world. What we love about these buildings, Dominique, is the creative faculty, the heroic in man. (553)
The fact that humanity created these magnificent buildings is itself beautiful, and these buildings represent this fact. They stand as a testament to our potential. If it is the case that the city can be beautiful, and beauty is worth protecting, then we have an obligation to protect the city through urban development, restoration, and preservation, just as we have a more widely accepted obligation to protect nature. Now, I do not go into the mountains just to escape the city, but to discover and re-experience a different dimension of beauty.
One of the qualities of the beautiful is that it inspires us to look upward, towards a higher potential. We often think that beauty resides in the object that draws our gaze, but perhaps this is not the case: “He wondered whether the particular solemnity of looking at the sky comes, not from what one contemplates, but from the uplift of one’s head” (Rand 553). The sky itself is not what is beautiful, but our desire to understand the sky, to reach towards and above it. What creates beauty is not the object’s attraction, but the way the object inspires us to achieve our potential.
Should we read or reject The Fountainhead?
Daniel Taylor, author of The Healing Power of Stories, defines a ‘bent book’ as a story that portrays evil as good and good as evil (Taylor). He ultimately concludes that we should never read bent books. There are several problems with this binary.
First, of course, it presumes that we already know what is good and what is evil, an assumption that skips over all the dilemmas of ethics. Often, we read to seek after good and reveal evil in their hiding places – but we do not already know where they are hiding when we start. Reading is not just a process of reinforcing our standards, but of developing our standards. It would be dogmatic and arbitrary to automatically reject all books that seem bent, and it would presume that we are an absolute moral authority that has the ability to judge objectively whether a book is good or evil.
Second, it is not necessarily true that bent books will result in greater evil. Before I read The Fountainhead, I assumed that morality consisted of interdependence and altruism, merely because this is the most common conception of morality. Ayn Rand forced me to analyze and justify this belief. This is essential, for any belief we have justification for has more binding force than a belief we merely accept. Furthermore, through bent books like The Fountainhead, we are able to recognize the reverse of our values. If we begin to see this perversion arise in ourselves, we are able to root it out immediately. Bent books thus encourage us to define ourselves in contrast to evil, and in doing so they can actually promote ethics.
Finally, the book has certain qualities that redeem its failures. It has an excellent and unique style of writing and storytelling, and this itself is valuable, for our method of expression can be almost as important as our content, and reading good writing allows us to write well. Even much of the content of The Fountainhead is worthy – this, I guess, can only be verified by reading the work yourself. In the end, scenes of wickedness are not enough to invalidate a book. As Leonardo Bruni argued, the Bible contains scenes that are “wicked, obscene, and disgusting, yet do we say that the Bible is not therefore to be read?” (Gamble 341).
I don’t contest that The Fountainhead is bent; rather, I think that bent books should still be read, as long as they have redeemable qualities. However, if you want to read more from The Fountainhead without reading the ‘bent’ parts, check out my notes on Google Drive.
Rand, Ayn. The Fountainhead. Centennial Edition. New York, NY: Signet, 1993. Print.
Levinas, Emmanuel. Totality and Infinity. Pittsburgh, PA: Duquesne University Press, 1969. Print.
Taylor, Daniel. The Healing Power of Stories. New York, NY: Doubleday, 1996. Print.
Gamble, Richard. The Great Tradition: Classic Readings on What It Means to Be an Educated Human Being. Wilmington, DE: ISI Books, 2007. Print.
Wiesel, Elie. Night. New York, NY: Hill & Wang, 1960. Print.
* The book is not explicit in this scene. It only implies that the rape occurs, and does not describe it.