Woke and Woker: The Shared Thought Processes of Conspiracy Theory and Critical Theory

Wake up, sheeple!

The least productive hobby I’ve ever had was arguing with Conspiracy Theorists in the YouTube comments – particularly the ones claiming that the Jews run the world and might also be reptoid space aliens. For a long time, I couldn’t figure out why I wasn’t making any progress. After all, I could find or come up with what I thought was an extensive, logical, and verifiable response to any of their silly arguments. My responses fell on deaf ears, and as a result I eventually swore off all comment section arguments.

Maybe my writing wasn’t as brilliant as I thought it was at the time, but I had no way of knowing that because they never actually responded to my arguments. Usually they either switched to another similarly silly argument or called me a “shill” for whatever company or organization they said was behind the conspiracy. Sometimes they insisted I was Jewish, even when I told them that I unfortunately did not have that honor.

The common catchphrase among the hard-core Conspiracy Theorists was “wake up, people/sheep/sheeple!” or “open your eyes!” There are invisible structures of power and hierarchy that determine how the world really works; you just have to wake up to see them. That’s why hard-core conspiracy theorists were sometimes called, and even called themselves, “woke,” before the term became better known as a label for the newly ascendant critical theory-based movement of the left. And despite their mutual opposition – a product of their cultural differences – woke critical theory and woke conspiracy theory have a lot in common in their epistemology: how they think about knowledge.

There are many theories, accusations, and suggestions that a conspiracy may exist, but a Conspiracy Theory worldview – which I emphasize with a capital C and capital T to set it apart from an ordinary accusation of a crime that involves multiple conspirators – goes beyond that: it interprets every important event or relevant piece of information as constructed by a system of power under an evil conspiratorial group. Any facts or arguments that contradict the theory are interpreted as disinformation from the conspiracy. Anyone arguing against the conspiracy theory must be in league with the evil group, and therefore any evidence against the theory is actually evidence that the theory of power is correct. A Conspiracy Theory, in this sense, is unfalsifiable circular logic, and therefore disconnected from truth and reality. It can’t be disproven, but there are countless reasons to doubt it.

The loosely connected group of dangerous ideas that is overtaking the culture and institutions of the U.S. and Europe, sometimes referred to collectively as “critical theory” after part of its academic origins, operates on the same principle. Its advocates are commonly called the “woke,” because they’re supposedly awake to how the world really works: to the networks of power that dominate the world and brainwash all the people who are asleep into disagreeing with their theories.

“Privilege,” most commonly in the form of “white privilege,” “male privilege,” and “cisheteronormative privilege,” is the most widely recognized manifestation of this “network of power” that controls the world. But the term is not widely understood. It does not necessarily mean that one is blessed with relative financial prosperity or advantaged in any other tangible way, which is why a white male hobo is “privileged” over Oprah. “Privilege” according to critical theory is about how one’s way of thinking is privileged, meaning that it controls how both whites and those who have been brainwashed by white colonialism think. There is a white way of thinking that controls the world like the Illuminati.

If you argue against this, that’s evidence of your privilege, and proof of how far we have yet to go to achieve “epistemic justice.” Arguments against white privilege are proof that the arguer is infected by white privilege, and therefore evidence of the existence and dominance of white privilege. Those who believe this are “woke” in the same sense as the conspiracy theorists: awake to the networks of power that everyone else is asleep to. Once again, we have unfalsifiable circular logic.

New Yorker cartoon by Ben Schwartz. Critical theory is sometimes opaque even to mainstream liberals who are expected to know the language.

There are many people, possibly a majority of Americans, who casually accept the worldview of either Conspiracy Theory or critical theory but haven’t skeptically investigated that worldview’s core or thought through its radical implications. Such people genuinely believe that these radical worldviews are simple, common-sense assertions: that we should distrust those in power, and that we should treat people kindly.

The hardcore activists of each group tend to retreat to one of these moderate positions when someone pushes back against the core premise of their radical worldview. Hardcore Conspiracy Theorists conflate something perfectly obvious (that bad people sometimes work together) with their theory as a whole (that some evil group rules the world and controls everything). Hardcore critical theorists conflate something perfectly obvious (that we should oppose racism and treat everyone with kindness) with their theory as a whole (that all our identities and knowledge are a function of our position relative to oppressive power structures).

Important debates are often derailed in the arguing-over-terminology phase before they can make any progress toward the truth. Sometimes that’s by design. Some conspiracy theorists will insist “it’s not a conspiracy theory, it’s a conspiracy fact,” and claim that the term “conspiracy theory” was invented by the FBI or CIA to discredit those who had learned “the truth.”

There are lots of different names for critical theory/”wokeism,” all of which are “problematic” for some reason or another. It’s not critical race theory; that’s an “academic analytical tool.” It’s not Cultural Marxism, because apparently that’s just an “anti-Semitic conspiracy theory.” What makes it an anti-Semitic conspiracy theory? The anti-Semitic conspiracy theorists sometimes use the phrase; therefore, it doesn’t exist. (For more about the context in which Cultural Marxism naturally exists, see my essay “Autonomy, Power, and the Possible: A Brief Intellectual History“). “Identity politics” is a useful term, but it can refer to political demagoguery on the basis of any identity, while only certain identities are allowed to be elevated in critical theory. The critical theory woke have sometimes self-identified as Social Justice Warriors or SJWs, though now it is apparently “unpersoning” to call them that.

I also would prefer eliminating the word “theory” from “conspiracy theory” and “critical theory,” as the word implies more intellectual rigor in these subjects than actually exists. If it were up to me, we’d call them “conspiracy guessing” and “critical racism.” But if we are ever to discuss a topic, we have to use words as commonly understood, endeavor to clarify them when they are potentially ambiguous, and not change the meanings of words to sabotage the possibility of good-faith discussion.

The woke of both sides have adopted an age-old understanding of rhetoric, one sometimes held up as a principle of the critical theory approach to knowledge: that terminology can bypass logic, manipulating ethos and pathos. Antifa can’t be fascist, it has anti-fascist right in the name!

Since the days of the ancient Greek Sophists, and probably long before, humans have known the importance of controlling the terminology in controlling an argument. For those thinkers who believed there is an underlying reality that humans can access, or that there are universal laws of math and logic, arguments had to be classified in order to separate logic from the other stuff. Aristotle therefore distinguished three modes of persuasion: ethos, pathos, and logos. Ethos is an appeal to the authority of the arguer and their sources, pathos is an appeal to the emotions of the listener, and logos is the appeal to logic and empirical data – or an attempt to fabricate or confuse it.

The most skilled rhetoricians have always known that ethos and pathos are the most effective ways to influence people, and are maximally effective when disguised as logos.

Ethos works in two ways: we can claim that something is good because Reverend King said it, or we can claim that something must be wrong because Hitler supported it – like neoclassical architecture, vegetarianism, Wagnerian opera, or motherhood. Some of the most common arguments we encounter on an everyday basis take the form of negative ethos: the thought process that says Cultural Marxism doesn’t exist because the anti-Semitic conspiracy theorists say it does. Some of the “logical fallacies” you may remember if you’ve taken a writing class are ways to categorize illogical uses of ethos: appeal to authority, poisoning the well, the genetic fallacy, ad hominem, etc.

The use of ethos has changed as the common contemporary mindset has subverted our perception of what constitutes authority. Among conspiracy theorists, the sources we would traditionally treat as authorities – scientists, seasoned professionals, articulate thinkers – are regarded as less than worthless. Expertise is a marker of involvement in the conspiracy, and logical, well-reasoned, evidence-based arguments are sometimes rejected by conspiracy theorists precisely because of the arguer’s expertise. As one flat-Earther told me, “It says the same thing on NASA’s website, so I know it’s fake.”

To reject all arguments from ethos in favor of investigating all claims logically and empirically is the ideal, though it is difficult to actually practice. Conspiracy theorists are notoriously uncritical about sources that come from someone they already agree with (for about a dozen concentrated examples, see my essay “Faith or Famine?”).

Woke critical theory gives an academic gloss to that same age-old mental bias that underlies ethos. Expertise is similarly rejected because of its “problematic history” of “epistemic violence” against marginalized voices. The Enlightenment call to go “back to the sources” for evidence is replaced with the call to “elevate colonized/disabled/noncisconforming/fat/etc. voices.”

Like conspiracy theorists, they judge arguments not on their merits, but on the hidden agenda the arguer is assumed to be perpetuating. As Alison Bailey, Director of the Women’s and Gender Studies Program at Illinois State, says, “critical pedagogy regards the claims that students make in response to social-justice issues not as propositions to be assessed for their truth value, but as expressions of power that function to re-inscribe and perpetuate social inequalities.” This is called “Privilege-Preserving Epistemic Pushback,” and people of any race are guilty of it if they disagree with critical theory.
(Alison Bailey, “Tracking Privilege-Preserving Epistemic Pushback in Feminist and Critical Race Philosophy Classes,” Hypatia 32, no. 4 (2017): 882.)

Within the critical theory epistemological framework, assertions are no longer about facts or reasoning; they’re about identity. The most important phrase in postmodern rhetoric is “as a.” “As a person of color, as a parent of a disabled person, as a member of the LGBTQIADF community, I am uniquely and exclusively entitled to a point of view on this subject.”

Your identity, of course, gives you your own perspective, but not necessarily your own truth and certainly not your own facts. Even if it were the case that one’s perspective gave them their own truth, it would not follow that their truth is the truth for everyone else, and those outside their perspective but somehow inside their truth can only listen. In critical theory, perspective is a function of narrative, and perspective is the foundation of identity. Because each person’s identity is produced by their perspective, disagreeing – or even failing to actively agree – with their perspective is “denying their personhood.”

These conceptual similarities among the woke explain certain practical similarities that you may have observed in either critical theory or conspiracy theory. For example, when the woke do use evidence, anecdotes are always better than data. Even though black or African Americans are ten times more likely to be killed by someone who shares their skin color than by a white person, activists tell us that they should fear for their lives because of the few videos in which a black suspect is killed by a white cop. Even though repeated epidemiologic studies have not found any association between the MMR vaccination and autism, we’ve all heard that someone we know has a cousin whose kid was diagnosed with autism after receiving a vaccine.

The woke assert a claim to secret knowledge, to have taken the metaphorical “red pill” and to see the invisible power structures of the world and who really controls it. It’s like having a claim to magic powers. Yet they often treat those who disagree with them not as merely uninitiated, but as agents of evil. The mainstream cultural power belongs to the critical theory faction, and they are constantly asserting that power against those who somehow commit a thoughtcrime against their worldview. Though Conspiracy Theorists don’t have the same cultural power, they do have a certain influence over their own audiences. Writing this, I know I’ve probably already made a lot of Conspiracy Theorists very angry, and I’m risking accusations of being an agent of Illuminati disinformation. But I hope those who have stuck with me will appreciate my candor in talking about the issue directly rather than patronizingly playing along with ideas I disagree with just to avoid offending a potential audience.

The psychology of hard-core conspiracy theorists is complicated and the psychology of the hard-core critical theory woke is mostly unexplored. Exploring the psychology of the arguer, of course, doesn’t discredit their arguments, but it can be useful in understanding their worldview. Woke theories on both sides allow those who believe them to blame their problems or the complicated issues they see in the world on evil forces like cisheteronormativity, the patriarchy, the Rothschilds, Whiteness, the Illuminati, systemic racism, or the Jews.

Wokeism can act both as a quirk of individual psychology and as a dynamic within a larger community. Communities like this thrive on groupthink and mob psychology to keep their adherents constantly fired up. Detecting systemic racism in unlikely spots is a badge of honor for critical theory adherents, just as detecting a conspiracy in ordinary events establishes credibility among conspiracy theorists.

The woke critical theorists also share a feature with conspiracy theorists in that they can seem harmless and goofy most of the time, but have the potential to be dangerous when given power. Power is always dangerous, but responsible people may be humbled by the complexity of the world and difficulty of their job and might exercise some restraint. The woke, however, think that they know how the world works and who they need to destroy to reach utopia. This mentality drove both Adolf Hitler and Pol Pot, as they strove to “free their people” from the people their theories deemed to be oppressors.

Poster from the 1941 “Anti-Masonic” Exhibition in German-occupied Serbia. Approximately 11,000 of the 12,500 Jews in Serbia were murdered during the occupation.

There are, of course, prominent differences between Conspiracy Theory and critical theory. For example, because it developed in plain sight on the internet rather than tucked away in the academy, Conspiracy Theory is spoken about in mostly plain English. This makes it easier to talk about, while the strange and nebulous language of critical theory makes its circular logic very difficult to identify.

If you try to argue logically or with data against the woke, they will typically tell you to “educate yourself” by watching a really long conspiracy video or reading a dozen articles on Slate or Salon. The difference is that we rarely see celebrities issue groveling apologies to Conspiracy Theorists with assurances that they have now “educated” themselves about the harm their words have done. In terms of culture and popular acceptance, Conspiracy Theory and critical theory are far apart.

So can you be awake without being woke? You can distrust or oppose the government without believing every accusation levied against them just as you can oppose racism without believing every accusation of racism. But that means taking upon yourself the task of skeptically evaluating evidence for yourself. If that sounds exhausting, it is.

In a classical Persian poem, an unjust king asks a holy man, “what worship is greater than prayer?” The holy man says, “for you to remain asleep till the midday, that for this one interval you may not afflict mankind.” (Gulistan, Tale XII). If “wokeness” is to afflict mankind, then it might be better to go back to sleep.




Faith or Famine?

A Response to Helena Kleinlein’s Presentation
“Feast or Famine? The Coming Food Shortages”

(Note: This post is responding to a presentation which is available on YouTube via the link https://www.youtube.com/watch?v=wu1uU5cL1D0&t=601s. I’ve included timestamps in brackets [ ] to indicate which part of the video my subsequent paragraphs are referring to. I’ve included sources as links in parentheses at the end of paragraphs to make it simple for the reader to check them independently.)

I have to preface this article by saying that I am not arguing against preparedness, any more than someone arguing against living in constant dread of being struck by lightning is arguing against health and life insurance. I believe that the modern and ancient revelations warn us to be prepared spiritually and temporally for a wide range of potential disasters, and that most of us, including myself, could and should do better in that regard. But Helena Kleinlein’s presentation uses preparedness as a jumping-off point for spreading all kinds of falsehoods. It goes from beautiful scriptures like Isaiah 41:13 to malicious accusations and blatantly dishonest pseudoscience.

To be clear, I do not think Kleinlein is being deliberately dishonest; I think she is uncritically repeating any and every argument on the internet that she thinks supports her conclusion. Her conclusion that we should have food storage as part of our emergency preparedness measures is correct, but in her zeal to demonstrate the need for preparedness she departs from the words of the prophets and turns to the words of internet hoaxers about ten minutes into the presentation. This is what I’ll be arguing against, and hopefully in doing so I can cut away all the nonsense and untrue conspiracy theories while leaving the core of truth intact.

[10:00] I hate having to start with climate because I feel like it’s a topic most of us, including myself, have gotten bored with. But it’s important to bear with me, because this is the point at which either Kleinlein or her source begins to get tricky. Based on what she says and on the citation at the bottom of those two slides, NASA appears to be saying that we are heading into a “grand solar minimum.” They actually say the exact opposite. Scientists, including those at NASA, were telling us we’re moving into another solar minimum, but not into another “grand solar minimum,” which are two different things. The solar minimum is the lowest point of the sun’s 11-year cycle. The solar minimum they said was coming has now occurred, taking place in 2020 and into 2021, which may partially explain some of the unusually cold weather in certain areas last winter.
(https://climate.nasa.gov/blog/2953/there-is-no-impending-mini-ice-age/)

The “grand solar minimum” is not as predictable as the typical solar minimum. This chart from NASA gives a good idea of what a grand solar minimum is vs. a regular solar minimum caused by the sun’s cycle. The low points that happened every 11 years are just solar minimums, while the minimum that remained low for about 50 years back in the 17th century is the grand solar minimum, or “Maunder Minimum.” It also helps to look at the scale: the low point of the grand solar minimum in the 17th century and the known highest maximum in the 1960s differ in total solar irradiance by only 0.17%. Looking at this data, there’s nothing to suggest we’re headed for something like the previous grand solar minimum. That small uptick at the end of one of the dips is where we are right now.
(https://climate.nasa.gov/internal_resources/1994/)

The quote that the “grand solar minimum will have much more impact on the environment than anything we puny humans can do” is not from a scientist; it’s from a political opinion blog. This, of course, does not mean that it can’t be true, but it does mean we need some kind of data or model to back it up, something that American Thinker doesn’t provide. NASA’s GISS General Circulation Model predicts that if we enter a new grand solar minimum in this century, it will decrease the Earth’s average surface temperature by about 0.3 degrees Celsius, not taking into account possible warming due to atmospheric effects. All else being equal, this would take global average temperature back to its 1990s level, not into a catastrophic freeze.
(https://ntrs.nasa.gov/api/citations/20020049982/downloads/20020049982.pdf)

[12:15] We are in a drought in 2021, particularly in the southwest US, and some of that is probably attributable to the solar minimum (but not the grand solar minimum, which hasn’t happened yet and may not happen at all any time soon). Decreased solar irradiance can cause a decrease in evaporation, which leads to a decrease in later precipitation. There are probably other causes as well, and as with most crises, government mismanagement usually finds a way to make things worse. But if the drought is due to this current solar minimum, we can expect conditions to improve next year as solar activity increases, just as thousands of other severe droughts throughout history eventually came to an end.

[13:35] The locusts in East Africa were a problem last year, and are continuing to be so into this year, but it’s a complete lie that they have destroyed “almost 100% of crops in East Africa.” Based on data from Ethiopia and Sudan, the two largest countries in East Africa, cereal production went down in Ethiopia just 4.6% from 2019 to 2020, while cereal production actually went up by 11.8% in Sudan from 2019 to 2020. But there is a reason we’re seeing these locust plagues in the least agriculturally advanced countries in the world: they have less access to pesticides. This has, undoubtedly, affected certain farmers far more than others, and those most affected should be in our prayers.
(http://www.fao.org/giews/countrybrief/country.jsp?code=eth)
(http://www.fao.org/giews/countrybrief/country.jsp?code=sdn)

[14:30] Honeybee health is also still a problem, though colony collapse disorder has decreased in recent years, and the number of colonies in the US has remained steady since 2006, not decreased by one third as Kleinlein claims without a citation.
(https://www.nass.usda.gov/Publications/Highlights/2019/2019_Honey_Bees_StatisticalSummary.pdf)

Kleinlein says that “Experts believe bees are dying for two main reasons… Bee killing pesticides, neonics and The proliferation of GMO plants.”  This is simply not true. The primary killer of bees is the invasive parasitic varroa mite. Bee nutrition is a problem as well, but not due to GMO plants, which provide the same, or in some cases more, nutrition than conventional plants. The primary reason for bee malnutrition is monoculture, which means a smaller variety of nearby crops for bees to harvest nutrition from. Pesticides only accounted for 6.1% of colony stressors in April-June 2020, the most recent quarter for which data is available, while pests and parasites accounted for 54.4%.
(https://downloads.usda.library.cornell.edu/usda-esmis/files/rn301137d/nc5819380/t148g6070/hcny0820.pdf)

Of course, planting flowers or vegetables is good for honeybee nutrition, but there’s no reason they should be from heirloom seeds. If you’re interested in beekeeping as a hobby and/or to help the bee population, that’s great! It sounds both interesting and beneficial for your family and your community. But you should do the research and get the proper equipment, as just keeping a bee house in your yard is likely to attract wasps or hornets, which actually prey on honeybees.

[16:30] According to Drewry, the Global Container Port Throughput Index is at an all-time high, meaning that ports are moving more containers than ever before. This suggests that the reason for the current high prices is the surge in worldwide demand, along with backlog from the pandemic and the Suez Canal incident.
(https://www.drewry.co.uk/maritime-research/port-throughput-indices-update/port-throughput-indices)

[16:55] Kleinlein doesn’t provide a source for the claim about cocoa supplies, but all the sources I found say the opposite. The International Cocoa Organization “estimated the 2020/21 world cocoa surplus at 165 thousand tonnes, up from a previous forecast of 102 thousand tonnes.” This is corroborated by the predictions in the futures market, as a continuous contract is down 9.47% YTD (look up the symbol CC00 on any financial database for the most current price).
(http://www.foresightcsi.com/files/Cocoa%20Monthly%20Report.pdf)
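If you want to check the futures claim yourself, here is a minimal sketch, assuming the third-party yfinance package and Yahoo Finance’s continuous cocoa contract symbol CC=F (CC00 is the equivalent continuous-contract symbol on sites like MarketWatch):

```python
# Hedged sketch: fetch year-to-date cocoa futures prices and compute the
# YTD change. Assumes the yfinance package (pip install yfinance) and
# Yahoo's cocoa futures symbol "CC=F".
import yfinance as yf

cocoa = yf.Ticker("CC=F").history(period="ytd")
first, last = cocoa["Close"].iloc[0], cocoa["Close"].iloc[-1]
print(f"Cocoa futures YTD change: {last / first - 1:+.2%}")
```

A falling futures price is consistent with the market expecting a surplus, not a shortage.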

[17:25] On the “Global Food Supply” slide, Kleinlein’s source (the Reuters article) doesn’t remotely say what she claims. I even checked the Internet Archive to make sure that the page hasn’t somehow been changed and it hasn’t. Seriously, just look at her source on this one. I don’t think she did. It doesn’t actually mention Brazil or China, but it does say the opposite of what she claims regarding Ukrainian food exports. Only Russia, according to her source, was proposing limiting their grain exports, but they are not “refusing to allow the world market to draw their crops” as Kleinlein claims.
(https://web.archive.org/web/20200329052817/https://www.reuters.com/article/us-health-coronavirus-trade-food-factbox-idUSKBN21D2TU)

Chinese food exports over time. 2021 is projected.

As the world’s most populous country, with an economy that largely focuses on exports of manufactured goods, China is naturally a large food importer, the second largest in the world after the US. But they are also an exporter of foods, primarily vegetables, and that has not changed as a result of the pandemic. Contrary to what Kleinlein says about China “not exporting food at all,” Chinese food exports actually increased in 2020 and are on track to increase again in 2021.
(https://ihsmarkit.com/research-analysis/agrifood-exports-of-china.html)

[17:40] Here’s another set of extraordinary claims with absolutely no sources. It’s true that the US is the world’s largest food exporter, because the US is the most efficient food producer in the world, producing similar quantities to India and China despite having a fraction of their population.
(https://www.investopedia.com/financial-edge/0712/top-agricultural-producing-countries.aspx)

Neither the US nor the rest of the world is experiencing agricultural decline. Based on the most recent USDA Census of Agriculture, food output increased an average of 1.18% per year from 2007 to 2017, actually accelerating an unbroken trend going back to the end of the Great Depression. During the same period, the amount of land needed for agriculture decreased by an average of 0.23% per year. Worldwide, primary crop production increased more than 50% from 2000 to 2018, based on the most recent FAO report. The most recent preliminary USDA data suggests that this trend is continuing in the US as of June 2021.
(https://ers.usda.gov/data-products/agricultural-productivity-in-the-us)
(http://www.fao.org/3/cb1329en/CB1329EN.pdf)
(https://downloads.usda.library.cornell.edu/usda-esmis/files/tm70mv177/1544ck71q/0c484d82x/crop0621.pdf)
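To put those annual rates in perspective, here’s a quick compounding check (my own back-of-the-envelope sketch, not a figure from the USDA report):

```python
# Compound the USDA average annual rates over the 2007-2017 census decade.
output_rate, land_rate, years = 0.0118, -0.0023, 10

output_change = (1 + output_rate) ** years - 1  # total output change
land_change = (1 + land_rate) ** years - 1      # total land-use change

print(f"Farm output over the decade:       {output_change:+.1%}")  # ~ +12.4%
print(f"Agricultural land over the decade: {land_change:+.1%}")    # ~ -2.3%
```

In other words, a decade of supposed “decline” left us producing about an eighth more food on slightly less land.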

[18:40] It’s correct that there’s no nationalized food stockpile (though there are various food reserves held by FEMA, the DoD, and the National Guard), but the Church has never told us that we should expect the government to take care of our food in times of feast or famine. Food storage is a family responsibility, to be supplemented by the community via the Bishops as necessary. Even outside the Church, any free people should consider food to be the responsibility of each family, with neighbors and charitable organizations like the Red Cross there to help those in emergency need. But emergency preparedness was never the purpose of the Commodity Credit Corporation Inventory (the so-called “emergency food pantry”). The CCC was created by Roosevelt as part of the New Deal to buy and sell agricultural products to raise or lower prices. Like the Federal Reserve, it’s not an emergency reserve; it’s a tool for the Feds to fix prices based on their political concerns.

[18:55] It is a bad year for spring wheat and a lot of other crops because of the drought. But Kleinlein misreads this graph: the purple line on the bottom is the 2017 spring wheat season. 2021 is just that blue dot on the left, because the 2021 spring wheat season just began. This chart measures growing conditions, not harvest yields, and the states where spring wheat is grown are all in Severe (D2) to Exceptional (D4) drought conditions this year. 2017 was also a drought year in the Northwest, which is why we see a low line that year, too.
(https://droughtmonitor.unl.edu/)

Even in a drought year, growing conditions vary by crop and region, as they do every year, which is why we prepare for droughts with crop insurance to mitigate financial risk among our farmers, a network of imports from other regions to keep food available, and personal food storage as a last resort or just to help insure against price increases and panic buying.

[20:45] There are a lot of potentially disastrous government schemes to combat climate change, though the adoption of alternative meat/protein sources is not a threat to our food supply. If “lab grown meat” becomes the norm, it will mean an additional potential food source that isn’t reliant on large amounts of land or at risk from diseases and pests. It’s unlikely these will be adopted any time soon, as the developing world is still reliant on meat. But if the political left attempts to ban or impose a sin tax on meat, which they might try in five to fifteen years, anyone with an inclination toward liberty should fight against it.

[25:49] Hydroponically-grown plants have similar and sometimes better nutrition than field-grown plants, as the grower is able to add the precise amount of each nutrient a plant needs. But the IBM Food Trust technology that article was about has nothing to do with hydroponics. It’s there to tell the store and the end consumer when, where, and how their food was produced. So if someone believes that hydroponically-grown food is less nutritious than field-grown or vice versa, the IBM Food Trust software will be able to tell them whether their food was grown in a field or hydroponically in a greenhouse.
(https://extension.psu.edu/hydroponics-systems-and-principles-of-plant-nutrition-essential-nutrients-function-deficiency-and-excess)
(https://www.ibm.com/blockchain/resources/7-benefits-ibm-food-trust/)

[26:00] Yes, you should grow what your family likes to eat. But the only way to control the nutrients in the soil is chemistry. And modern farmers use chemistry to measure the levels of nutrients in their soil and use fertilizers to supplement nutrient levels when needed.

[27:30] The most serious threats to human life and prosperity come from the government. The most serious famines of the 20th century were not caused by natural factors, but by the Stalinist and Maoist regimes. Even in the US, the Supreme Court ruled in Wickard v. Filburn (1942) that Congress can ban or limit certain farmers from growing their own crops to feed their own animals.

The Holodomor was a government-imposed famine created by the Stalinist Soviet regime against the citizens of Ukraine. Food was seized as part of the communist collectivization efforts, leading to the deaths of around four million Ukrainians and five million others under the Soviet thumb in 1932 and 1933. The Ukrainians who had stored food were called “kulaks” and were often deported or killed by the Stalinist enforcers. That’s why I advocate including a few rifles as part of any preparedness plan. The mindset of collectivization and the tearing-down of those perceived as “hoarders” is an inherent part of socialist thinking, which is why we should always be on guard against it. These are the stakes in the war of ideas against Marxism. But this is not what the “30 by 30” scheme is all about, and when we cry wolf in cases like this, we undermine the position of freedom and reason.

Federal land management in the western United States. Detail of graphic from the Congressional Research Service (https://fas.org/sgp/crs/misc/R42346.pdf)

Increasing protected lands as part of this scheme could be very economically detrimental, but not a significant threat to cropland, which constitutes about 17% of land use in the US. Most of that increase would come out of forest-use land, 28% of US land use, and rangeland, 29% of US land use. That’s what makes this plan especially ridiculous: it will replace actively managed forest with wild forest and call that conservation. It’s true that Nebraska is over 97 percent privately owned. But it happens to be the state with the 6th-smallest share of Federal land in the country. While in Nebraska and Maine Federal land is only 1.1% of total area, in Nevada, Utah, and Idaho it’s 84.9%, 64.9%, and 61.6%, respectively. In total, Federal lands already constitute about 28% of the total area of the US, though some of that area is used as rangeland and forest.
(https://www.ers.usda.gov/webdocs/DataFiles/52096/Summary_Table_1_major_uses_of_land_by_region_and_state_2012.xls?v=8340)
(https://fas.org/sgp/crs/misc/R42346.pdf)

[29:00] None of Kleinlein’s gross errors should discount the need to be prepared for disaster, even if it’s not an “end of the world” type disaster. Food storage, water filtration systems, power self-sufficiency, and evacuation preparedness measures could make all the difference in an event like the recent Texas blackouts, a natural disaster, a particularly bad series of droughts, or a pandemic so severe it requires a total lockdown and cuts off the food supply. Preparedness is a matter of insurance, of hoping for the best but readying for the worst, and it’s also a matter of following revelation from and trusting in the Lord.

[30:10] Kissinger is a really bad guy, but there’s no evidence he actually said this. Kissinger is a Machiavellian; he sees everything in terms of power and manipulation. But this picture/quote is from a conspiracy theory blog. Various versions of this same quote have circulated among conspiracy theorists for years and have been attributed to people and groups ranging from the Freemasons to secret Nazis to the Jewish banking conspiracy.

[31:22] The World Economic Forum advocates some terrible policies, but that doesn’t mean they’re part of a secret combination. It’s also important to understand what they actually are: they’re not a World Government or a secret super-bank or anything like that. They’re an advocacy group/think tank that has a meeting of academics and activists in Switzerland every year and then publishes a bunch of articles that range from interesting to stupid to borderline dystopian. And like any center-left group, they see every crisis and world event as an opportunity to “reimagine” the global economy along more “socially democratic” lines.

[31:48] The “Great Reset” says nothing about 8 main planks; this list actually comes from an article the WEF published back in 2016 containing 8 predictions about what the world would be like in 2030. When it says “a handful of countries will dominate,” it means that “instead of a single force, a handful of countries – the U.S., Russia, China, Germany, India and Japan chief among them – show semi-imperial tendencies.” They were predicting that the U.S. will no longer be the lone dominant world superpower in 2030.
(https://www.weforum.org/agenda/2016/11/8-predictions-for-the-world-in-2030/?utm_content=bufferdda7f&utm_medium=social&utm_source=facebook.com&utm_campaign=buffer)

[32:18] I agree that we don’t need to “transform” our food system. But if our food system is in as much danger and decline as Kleinlein claimed it was in the first half of the video, why is she objecting to transforming it? Shouldn’t we want to transform it if it really is in decline?

[33:06] The phrase “dominant global entity,” which she includes in her list of points, appears nowhere in the “Reset the Table” plan; the list has been deceptively modified from the Rockefeller Foundation’s plan by Kleinlein or her source. Part 1 of the plan is actually to “Create an integrated nutrition security system” (page 9). They break that down into three specifics: “1. Strengthen nutrition benefit programs to ensure children and families are fed. 2. Invest public and private funding in school food programs as anchors of community feeding. 3. Expand Food is Medicine.” Once again, we have some proposals that are practically and economically dubious but are clearly not advocating collectivizing the food supply under a global secret combination. You can look at the other two parts of the Rockefeller Foundation’s plan (pages 12 and 14) to see more examples of how Kleinlein or her source is grossly misrepresenting the admittedly lousy plan. But like the WEF, the Rockefeller Foundation is a think tank, which means that, like most think tanks, it regularly issues extreme recommendations that are mostly ignored by political bodies.
(https://www.rockefellerfoundation.org/wp-content/uploads/2020/07/RF-Reset-the-Table-FULL-PAPER_July-28_FINAL.pdf)

Image of the Tweet in Kleinlein’s presentation. I’ve added highlighting to show the peripheral parts of the tweet she missed.

[33:15] This slide even shows the date of the tweet at the bottom of the image. It’s from 2016, four years before the WEF’s “Great Reset” plan or the pandemic that prompted it. It’s another one of the 8 predictions they made back in 2016, saying that “all products will have become services” by 2030. This is a prediction of a future where, for example, people will no longer own cars; they’ll rent self-driving cars for short trips. This kind of prediction extrapolates from the trend of things we used to purchase becoming subscriptions. For example, we used to buy movies; now we subscribe to Netflix or Disney+. Even now, weird subscription schemes are popping up for everything from clothes to meals. They’re predicting that trend will expand to almost everything by 2030. While it doesn’t seem like a very likely prediction, it’s probably more likely than the WEF announcing its evil plan to take over the government and ban all property in 2030.

[34:00] The Federal Reserve has been the most destructive force in our economic life for the last hundred years, but a “Federal Reserve Funded Financial Institution” is not a special thing; all banks in the US can be funded by the Federal Reserve. One of the ways the Federal Reserve has been issuing money recently is by purchasing corporate bond ETFs, meaning essentially that a corporation is getting a loan from the Federal Reserve Bank. BlackRock has been selling those bonds to the Federal Reserve and using the money from those bond purchases to purchase land and homes, which it has an advantage in doing because of the low interest rates at which the Fed purchases its bonds.

It’s important to note that BlackRock is an asset manager. This means that the $9 trillion they manage is not their money; it’s money they manage for their clients, often in the form of ordinary people’s IRA and 401(k) accounts. They also process transactions when the Fed buys other ETFs, like those of Vanguard or Fidelity. It doesn’t require any corruption on BlackRock’s or Vanguard’s part to hurt the economy; it’s the naturally destructive result of the Federal Reserve holding interest rates artificially low during an economic expansion.

[34:42] Turner has owned huge amounts of land for a few decades at this point, and that kind of buying is probably increasing as investors worry about inflation. Gates’s 242,000 acres is a lot, but it’s a tiny fraction – less than 0.062% – of US farmland. This is like the claim of socialists that one day a few evil billionaires will control everything. In this case, the socialists actually might be slightly closer to reality: Gates’s net worth is as much as 0.19% of all capital stock in the US. But that is still not even close.
(https://www.ers.usda.gov/webdocs/DataFiles/52096/Summary_Table_1_major_uses_of_land_by_region_and_state_2012.xls?v=8340)
(https://fred.stlouisfed.org/series/RKNANPUSA666NRUG)
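The 0.062% figure is easy to sanity-check (a back-of-the-envelope sketch of mine; it assumes the roughly 392 million acres of US cropland in the USDA land-use table linked above as the base):

```python
# Back-of-the-envelope check of Gates's share of US cropland.
gates_acres = 242_000
us_cropland_acres = 392_000_000  # approx. cropland, per the USDA table above

share = gates_acres / us_cropland_acres
print(f"Gates's share: {share:.3%}")  # ~0.062%
```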

[35:40] What is a bigger threat to our liberty? Is it Bill Gates buying farmland, or is it the atrocious ideas that are overtaking our culture? Did Moroni see Ted Turner owning ranches in Montana, or did he see the new system of thinking, the “critical theory” of anti-rationality, racialized thinking, and rejection of facts that is indoctrinated in the schools, enforced in intellectual circles, and shouted from the mountaintops of social media?

[36:35] As I demonstrated earlier, food production is not being reduced; it is growing. (Note that the “Armstrong Economics” she quotes is not an economics journal, but a conspiracy theory blog.) This means the time to stockpile is now. But there’s no “we” about it. It’s not the job of “global decision makers” to stockpile for you. As long as there is any semblance of freedom in this world, it’s up to individuals and families to stockpile for themselves, and up to churches and charities to stockpile for expected shortfalls. “Zion shall escape if she observe to do all things whatsoever I have commanded her.” But not if she expects the federal government and “global leaders” to do it for her.

After all this, Kleinlein gets into the section about food and water preparedness, which I’m certainly not going to argue with. She leaves out fuel and power, which the recent Texas blackouts showed might be the thing we need most, but that’s fair enough as power preparedness requires a large initial investment and involves a lot of technical details. Personal water filtration should be a part of every preparedness kit and kept in every car as well, but I’ll defer to her expertise on food supplies.

Faith has no room for fear. The faith of a Saint is not in how bleak the future is or how terrible and ignorant the rest of the world is compared to us. We should not take comfort in self-satisfied claims to secret knowledge, but in the Comforter, the Holy Ghost. Our faith is not in our ability to “crack the code,” “see the signs,” or “connect the dots”; our faith is in Jesus Christ.


I welcome any corrections or clarifications on any of these points, and given the number of sources I’ve looked through and the back-of-the-envelope calculations I’ve had to do in writing this response, I am open to the possibility of my own errors. I’m less open to accusations that I am in cahoots with BlackRock, the Illuminati, the Rothschilds, the World Economic Forum, the Israelis, the Gates Foundation, or the Brady Bunch. I received an important insight from my sister while writing this article, which I thank her for and use with her permission, though I cannot confirm or deny her affiliation with the Illuminati.
You can email me at andrew@libertysaints.com


Top image: The collectivization of food by Soviet forces as part of the Holodomor, 1933.

Hume and True Skepticism: How Do We Know?

“Custom is the great guide of human life,” according to David Hume.1 The reasoning of rational thinkers and philosophers at work is a completely different way of thinking from the everyday common sense by which most people live their lives. If we as critical thinkers were to apply our skepticism to our everyday lives, we simply wouldn’t be able to function. We would be constantly walking directly into walls, because we couldn’t be certain that the wall was in fact in front of us, that it constituted a barrier to our progress, or that running into it would likely break our noses. We would be like Pyrrho, the ancient philosopher who was allegedly so skeptical that he had to be stopped by his friends from walking out into the middle of traffic because he had no rational basis for proving it was dangerous to do so. Must the skeptical reason of the rational thinker and the customary common sense of the everyman be so completely divorced from each other?

Hume’s famous example of the inability of reason to justify the principles by which we customarily live our lives is the principle of cause and effect, demonstrated with the relationship of billiard balls on a table. When we think we are observing one ball hitting another, thereby causing the second to move, we are in fact only observing the movement of one billiard ball followed by the movement of another billiard ball. The “law of cause-and-effect” is not something we see; we only see one thing happening, and then another thing happening immediately after. The cause-and-effect relationship is added by the mind via the process of induction, but it is not contained in the observation of the billiard balls.

If the process of induction is not based on reason, where does it fit? According to Hume, “All reasonings may be divided into two kinds, namely, demonstrative reasoning, or that concerning relations of ideas, and moral reasoning, or that concerning matter of fact and existence.”2 Induction cannot be demonstrative reasoning, because it deals not just with relations of ideas, but with matters of fact. It cannot be moral reasoning, because it attempts to apply a universal idea to matters of fact.

This skepticism even applies to what Hume calls the “Uniformity Principle.” The Uniformity Principle is the common-sense idea that the world is uniform and subject to simple predictions. The example he gives is that we assume that eating bread will be nourishing (it provides calories) because it was nourishing every other time we ate it. We assume that the sun will come up tomorrow because it has come up every day for the duration of recorded history. This, Hume says, cannot be proven, and in fact is based on circular logic.

Painting of Hume by Allan Ramsay. Despite my extensive research, I have yet to find out why he’s wearing a shower cap.

Induction is the logical process by which we assume that, because the sun rose every day of our lives, it will rise tomorrow. It is a probabilistic argument, and a practical one. But it’s also a circular argument, according to Hume. How do we know that induction is correct? Because it works. And how do we know it works? We know it inductively, because it worked every other time we used it. This is circular logic.

The principle of cause and effect, according to Hume, is a connection “that accompanies the imagination’s habitual move from observing one event to expecting another of the kind that usually follows it. That’s all there is to it. Study the topic from all angles; you will never find any other origin for that idea.”3 (emphasis added)

But why should we think of ideas in terms of their origins? If we cannot prove the existence of the cause-and-effect relationship in the material realm, why would we evaluate ideas on the strength of their causes? Hume’s basic epistemological framework evaluates things on the basis of their origins. Is this really how people think? More importantly, is this really how people should think? Is this the way of thinking most likely to identify truth or pragmatic value?

Any argument against an epistemological framework – a way of thinking about how we know what we know – must take the form of a new epistemological framework, capable of evaluating claims, evidence, postulates, etc. with more knowledge-finding ability and less risk of contradiction than the previous system. Kant, for example, created a system in which judgments of cause and effect or the Uniformity Principle are possible as “synthetic a priori” judgments. Hegel created a system in which cause and effect are part of a back-and-forth dialectic.

As an epistemological starting point, let’s take a principle of the scientific method: it is not possible to prove a theory, only to try and fail to reject it. Science is a dialectical process, in which a hypothesis constitutes the thesis and the tests, experiments, or challenges we put it through constitute the antithesis. In this scientific framework, the antithesis is an attempt to negate the thesis.

The dialectical method associated with G.W.F. Hegel has a reputation as one of the most difficult concepts in philosophy, owing to his infamously jargon-laden texts. But simply put, the dialectic is the set of relationships and changes in relationships that constitute any kind of progress. The simplest application of the dialectic is in the realm of intellectual progress: first the intellectual community has an idea, which in Hegelian thought is called the “thesis.” Then others in the intellectual community, seeing the flaws with the mainstream thesis, come up with their own idea, called the “antithesis.” The thesis and antithesis remain in conflict until they can be “subsumed” into a system that incorporates both of them, which is called the “synthesis.” The synthesis becomes a new thesis and the process repeats again, continuously improving the state of knowledge.

The scientific method is not precisely the same as Hegel’s formulation of the dialectic, in which the thesis and antithesis become part of the synthesis. The scientific method is a dialectic with a binary fork; that is, every antithesis either causes the rejection of the thesis or fails to cause the rejection of the thesis. When a test or challenge fails to cause the thesis to be rejected, that test serves to strengthen, not to negate or transform, the thesis, because the thesis has shown that it can survive the test. If our test or challenge shows that the thesis causes contradiction, then we must reject it. We do not simply subsume the rejected hypothesis into the synthesis; we formulate a new hypothesis to replace it. Of course, it is possible that the new hypothesis is a similar but modified version of the old one, but sometimes the new hypothesis is radically different, or else evaluates things in terms of an entirely different framework.

The fundamental process behind the scientific method is dialectical testing for contradiction.
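To make the binary fork concrete, here is a toy sketch in code (my own illustration, not anything from Hume or Hegel): every challenge either forces rejection of the thesis or leaves it standing, strengthened but never proven.

```python
# A toy model of the scientific method's binary fork: each test either
# rejects the thesis or fails to reject it; surviving tests strengthen
# the thesis without ever proving it.
from typing import Callable, Iterable

def dialectical_test(thesis: Callable[[str], bool],
                     challenges: Iterable[str]) -> str:
    for challenge in challenges:
        if not thesis(challenge):  # a contradiction: the antithesis wins
            return "rejected: formulate a new hypothesis"
    return "retained: strengthened by surviving its tests, never proven"

# Example: the thesis "all swans are white" against observed swans.
def all_swans_white(swan: str) -> bool:
    return swan == "white"

print(dialectical_test(all_swans_white, ["white", "white", "black"]))
```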

But the scientific method is constrained and focused by means of experimentation and controlled observation, and therefore applies only to problems with hypothetical answers that can be tested via experimentation. The general process of dialectical testing for contradiction, however, can be applied to all knowledge. Testing for contradictions is, in fact, the typical way by which we skeptically evaluate any proposition.

There is no proof in science, just testing for contradiction. In applying the principle of testing for contradiction to the rest of knowledge, we must consider a radical proposition:

Proof does not exist as a means of attaining knowledge.      

We naturally have to clarify this claim. Proof can exist in the colloquial sense, in which whatever idea we try but fail to reject is thereby “proven.” Furthermore, a “deductive proof” might exist as a process of analyzing statements already proven, but since a deductive proof’s premises are always unproven, a deductive proof is simply a mental exercise.

As an example of this, take the most common and basic example of a deductive proof: Socrates is a man; all men are mortal; therefore Socrates is mortal. Deductive logic may say that if Socrates is a man, and if all men are mortal, then Socrates is mortal. We have a “valid” proof, but we do not have proof. How do we prove that all men are mortal? We might say that every man we can find evidence of is mortal, and thereby induce that all men are mortal, but as Hume taught us, we cannot have inductive proof. If we cannot have inductive proof, then we cannot have a complete deductive proof.
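A formal proof assistant makes this distinction visible directly: the deduction checks mechanically, but the premises remain hypotheses that the machine never certifies as facts about the world. A minimal sketch in Lean 4 (the declarations are mine, purely illustrative):

```lean
-- The deduction below is mechanically valid, but Person, Man, Mortal, and
-- both premises are bare assumptions; nothing here establishes that they
-- are true of the actual world.
axiom Person : Type
axiom socrates : Person
axiom Man : Person → Prop
axiom Mortal : Person → Prop

theorem socrates_mortal
    (all_men_mortal : ∀ p : Person, Man p → Mortal p)
    (socrates_is_man : Man socrates) :
    Mortal socrates :=
  all_men_mortal socrates socrates_is_man
```

The theorem is conditional through and through: to apply it to reality, you are back to Hume’s problem of justifying the premises inductively.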

But we wouldn’t say, then, that deduction is impossible, only that a positive proof via deduction is impossible. By the same token, we should say the same thing about induction.

What do deduction and induction give us, then? Only a suggestion. When we see the conjunction between the movements of the billiard balls, induction gives us the suggestion of a cause-and-effect relationship. We then evaluate that suggestion with dialectical testing for contradiction: by experimenting with tossing billiard balls around the table ourselves, by analogizing it to other cause-and-effect relationships, and by formulating a theory of cause-and-effect that we can evaluate mentally for possible contradiction with other theories. Induction and deduction also serve to reduce suggestions into their constituent parts to make them easier to test.

So what are we doing when we evaluate claims in our minds? Let’s look at what has been claimed as the most basic and undoubtable proof of all: Descartes’s “I think, therefore, I am.” A person cannot prove that they are not in the Matrix, or the subject of their own personal “Truman Show,” or a brain in a vat receiving all experiences through an electrical wire. Descartes took this kind of doubt to its logical conclusion: probably as an intellectual exercise, he doubted his own existence for a moment, but found a way out with the argument “I think, therefore, I am.”

Rene Descartes, detail of portrait by Frans Hals

Descartes justifies this claim not by defining “think,” “therefore,” and “I am,” which should be all that it takes to evaluate the claim if it were possible to truly prove things via definition. Instead, he attempts to disprove the claim. He challenges it by evaluating the possibility that perhaps a “supremely powerful and cunning deceiver” is fooling him into believing that he exists. He then challenges his own challenge, rejecting it by saying that “if he is deceiving me I undoubtedly exist: let him deceive me all he can, he will never bring it about that I am nothing while I think I am something.”4 Descartes didn’t “prove” he exists; he asserted it and tested it for contradiction. If a proof were actually proven, there would be no point in testing possible objections to it.

All “proof” is the survival of a suggestion when tested for contradiction.

Let’s take another extremely basic proof: A=A. How do we prove that A=A? We can’t; we can only disprove its opposite. We can say that if A sometimes did not equal A, then we wouldn’t be able to even talk about it, because without a law of identity we wouldn’t be able to think or compare ideas at all. A≠A is so contradictory we can’t even conceptualize it. But we still can’t formulate a positive proof of its opposite.

This is how Moroni tells us we should test ideas and principles for truth.

“4 And when ye shall receive these things, I would exhort you that ye would ask God, the Eternal Father, in the name of Christ, if these things are not true; and if ye shall ask with a sincere heart, with real intent, having faith in Christ, he will manifest the truth of it unto you, by the power of the Holy Ghost.”

–Moroni 10:4 (emphasis added)

Notice the use of the phrase “not true.” He is asking us to test if a proposition can be falsified, and it’s by not negating whatever proposition we bring to the Lord that he manifests the truth of it unto us.

This idea is embedded in how we think about evidence and proof, even though we don’t always directly acknowledge it. When we say things like “prove beyond a reasonable doubt” or “prove beyond all doubt,” we are defining the proof by the strength of its possible negation. If positive proof were a reality, then the phrase “prove beyond all doubt” would just be “prove.” If custom is the guide of thought, philosophy should not be fundamentally different; the difference should only be that philosophers are willing to be more skeptical. To be skeptical is to search for the underlying principles or constituent parts of an argument and then to evaluate them by this negative dialectic.

Moroni in his natural habitat

Knowledge is always uncertain, meaning that it is subject to perpetual evaluation by the dialectical testing for contradiction. It’s a dialectic between everyday common sense, which takes a useful suggestion as truth, and philosophical skepticism, which asks, “If your suggestion were truth, wouldn’t it cause these contradictions? How does your suggestion stand up to these possible tests?” Practicality takes any reasonably well-tested suggestion as truth until further notice.

Hume recognizes that when reason evaluates relations of ideas and matters of fact, it is attempting to find a contradiction or an applied contradiction in the ideas, terms, or evidence. Why, then, should we ever deal in justifications and proofs, inductive or deductive? We can only search for contradiction. He says “[the proposition] that the sun will not rise tomorrow is just as intelligible as—and no more contradictory than—the proposition that the sun will rise tomorrow” because it can be conceived by the mind easily and clearly.5

But is evaluating whether a proposition “can be conceived… easily and clearly” really what human reason does, or ought to do, when it searches for contradictions? Descartes used the same concept, which he called a “clear and distinct notion,” in his proofs for the existence of God.6 But what is a “clear and distinct notion”? The notion of clear and distinct notions is a rather unclear and indistinct notion to my mind. Some people may disagree with me on this point and others may agree, but if the notion of clearness and distinctness were itself clear and distinct, there should be no disagreement.

What, then, are we actually doing when we search for contradictions? We are evaluating the implications of an argument to see if they cause contradictions. We are asking, “If this were true, what would be the results?” The if-then evaluation I just made is itself this type of argument. We cannot positively say that something does not cause contradictions, just as we cannot say that something is definitely “clear and distinct,” but we can point out contradictions, just as we can point out a lack of clarity and distinctness.

A possible contradiction is a reason to doubt a hypothesis. A doubt need not be proven; it only needs to be suggested, and like any suggestion, the doubt too is subject to the process of dialectical testing for contradiction. President Uchtdorf’s famous advice to “doubt your doubts” is a fundamental principle here. Of course, a doubt of a doubt can be doubted, which means that every theory we hold is attached to a “tree of doubts” (or a “tree of evidence,” viewed more optimistically) that we must evaluate along with the hypothesis.

The process of dialectical testing for contradiction can be applied to all types of judgments, not just experimental science. Even contemporary ethical reasoning uses this method regularly in the form of thought experiments. The trolley problem, for example, tests the ethical theory of utilitarianism by asking what someone would do when given the option of killing one person to save two. If utilitarianism were really as universal, obvious, and commonsensical as its proponents believe, then we ought to be willing to push a bystander onto the tracks to prevent the trolley from hitting two others. We ought to be willing to kill one person as an unwilling organ donor for two others if utilitarianism were the obvious mode of ethical thinking.

This epistemology of skeptical doubting and testing for contradiction can be applied broadly across all fields of knowledge. It is, in fact, our natural way of arguing, though we humans have a bad habit of applying it only to that with which we disagree. Metaphysicians used the dialectic of testing for contradiction when they evaluated necessary truths by the implications of their existence and of their non-existence. Mathematical “proofs” break a statement down into its constituent parts, each of which is as subject to doubt as the law of identity, as shown earlier.

This is why we should “become a seeker,” as Steven C. Harper urged at his recent BYU devotional. He tells us that beyond the complexity of evaluating the thousands of doubts, caveats, and pieces of evidence that bear on a claim lies a “simplicity on the other side of complexity.”7 Every tree of doubts has a simple root truth, a simple principle that can guide everyday life without our needing to consult every branch of the tree, verifying that we are not actually in the Matrix, before deciding whether to cross the street.

At the root of this whole project is the problem of evaluating this epistemological system itself. How do we evaluate epistemological systems? We clearly can’t prove them mathematically, so the natural, obvious, pragmatic, and most logically sound way is to expose epistemological systems to a negative dialectic.

In giving examples of this epistemology in action throughout my argument, I naturally used this principle of testing for negation, mostly unconsciously. When I give examples of how this framework applies in a certain context, I am, in fact, testing this suggested framework to show that it can be applied without contradiction (in addition, of course, I am using examples to try to help the reader understand my thesis).

Hume was right when he said that “this idea of a necessary connection among events arises from a number of similar instances which occur of the constant conjunction of these events.” So what? Tracking ideas to their origin to find their proof is a futile approach to epistemology. It’s an example of the “genetic fallacy,” and if we were to evaluate all ideas by their origin, we would immediately find that they are all based in our imperfect minds. The human mind doesn’t have the power of “proof,” though it has the power to create unproven ideas, or “suggestions,” which range from useful to beautiful to intriguing to absurd to imperfect to evil. It’s up to us, individually and collectively, with the help of our senses, our minds, and revelation, to learn about and skeptically test those ideas.


Note: This essay incorporates some material written for BYU’s History of Philosophy course (PHIL 202) in March of 2021. I’m picking on Hume here because he’s probably the most intellectually influential skeptic who has ever lived, and it’s the brilliant ridiculousness of his famous billiard ball example that sparked the idea that led to my approach to epistemology.


Endnotes:

1. David Hume, Enquiry Concerning Human Understanding (Early Modern Texts, 2017) 21, http://www.earlymoderntexts.com/assets/pdfs/hume1748.pdf.

2. Ibid., 16.

3. Ibid., 37.

4. Rene Descartes, Meditations on First Philosophy (Early Modern Texts, 2017), 4, http://www.earlymoderntexts.com/assets/pdfs/descartes1641.pdf.

5. Enquiry, 11.

6. Meditations, 13-16.

7. Steven C. Harper, “How I Became A Seeker” (BYU Speeches, 8 June 2021) https://speeches.byu.edu/talks/steven-c-harper/how-i-became-a-seeker/.


Top Image: Detail of The Billiard Room by Nicolas Antoine Taunay, after 1810

Intelligences, Subjective Phenomenology, and Transcendence

Note: This essay was written for BYU’s Philosophy of Religion course (PHIL 215) in August of 2019. If you should ever have the opportunity to take a class from Brother Roger Cook, I can’t recommend him highly enough. This is probably the densest writing you’ll find here on LibertySaints, but if you can muscle through it, you might find the topic of philosophy of mind as fascinating as I do.

In 1831, Joseph Smith and Sidney Rigdon received a revelation, recorded in the Doctrine and Covenants, concerning the religious experiences and beliefs of the Shakers. The Shakers, along with certain members of The Church of Jesus Christ of Latter-Day Saints who accepted some of their teachings, believed, among other things, that Jesus Christ had returned in the form of a woman, Ann Lee. The Shakers had been “deceived” by what they took to be revelation.1

Revelation, miracles, the confirmation of the Holy Ghost, and even mystical and visionary experiences are the “rock” on which the Church is built.2 Doctrine and Covenants Section 42 says, “If thou shalt ask, thou shalt receive revelation upon revelation, knowledge upon knowledge, that thou mayest know the mysteries and peaceable things…”3 But The Church of Jesus Christ of Latter-Day Saints is not the only sect that believes in and attempts to derive doctrine from revelation. This raises the question: is revelation inconsistent? Why would revelation from God mean different things to different people? And if it is inconsistent, does it remain valid as a source of knowledge? Or even as a real phenomenon?

The late philosopher of religion Louis Pojman argues that “religious experience is amorphous and too varied to yield a conclusion with regard to the existence of God.”4 He compares the religious experiences of western Christians to those of a polytheist in East Africa who receives a vision of the hippopotamus-god, showing that religious experience varies across people and cultural beliefs.

Those who have observed mysticism and revelation from a physiological perspective acknowledge that religious experience is a real mental and neurological phenomenon in some sense, and that it may be epistemologically convincing, for a limited period of time, to the person who experiences it. But the ability of revelation to give us knowledge by which we live our lives remains in doubt for those who conduct such studies. As William James says in his classic study The Varieties of Religious Experience:

There are moments of sentimental and mystical experience. . . that carry an enormous sense of inner authority and illumination with them when they come. But they come seldom, and they do not come to everyone; and the rest of life makes either no connection with them, or tends to contradict them more than it confirms them. Some persons follow more the voice of the moment in these cases, some prefer to be guided by the average results. Hence the sad discordance of so many of the spiritual judgments of human beings…5

If revelation seems disconnected from the physical world as measured scientifically or empirically, its value as knowledge is in question. Furthermore, the perspective of the physicalist challenges the very possibility of religious experience having any value outside the neural processes of the brain. The brain in the physicalist model is mechanically reducible, in that it can be completely understood as a complex system of interacting pieces of biological matter, like an electronic calculator or computer on a larger scale. The mathematician and philosopher Bertrand Russell denies the possibility of any mental existence beyond the matter that can be scientifically observed and analyzed:

The continuity of the human body is a matter of appearance and behavior, not of substance. The same thing applies to the mind. We think and feel and act, but there is not, in addition to thoughts and feelings and actions, a bare entity, the mind or the soul, which does or suffers these occurrences.6

A different conception of the mind is necessary to explain both religious experience and the relationship between the mind, religious experience, and the world. This is where transcendent experience has a place, as it represents the relationship between the divine and the conscious mind. Transcendent experience can be usefully defined here as all mental phenomena that access the transcendent reality, or the reality beyond the scientifically measurable physical world.

The concept of mental phenomena, or “phenomenology,” is necessary to understand the mind in its connection with the transcendent. To explain the processes of the mind or consciousness, we do not refer primarily to the anatomy or physiology of the brain; we instead speak of what the consciousness experiences, or “mental phenomena.” Mental phenomena include not only the raw sensations of the senses, but also certain more complex mental events, such as love, learning, or appreciation of art.

Consider the classic question on this subject: what does the color red look like? And does it look the same to all people? This is not a question about the physical nature of visible electromagnetic radiation, which can be measured at wavelengths of approximately 625-740 nanometers and frequencies of approximately 405-480 terahertz. Visible radiation (“light”) is what causes the mental phenomenon of the color red, but it is not the color red itself. It is only when radiation of this type reaches the retina, is processed by the brain, and reaches the conscious mind that the mental phenomenon of seeing the color red actually occurs.
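As a quick sanity check on those figures (my own sketch, not part of the original essay), the standard vacuum relation frequency = speed of light / wavelength maps the endpoints of the cited wavelength range onto the endpoints of the cited frequency range, inverted, since longer wavelengths mean lower frequencies:

```python
# Convert the cited red-light wavelengths to frequencies using the
# standard vacuum relation: frequency = speed_of_light / wavelength.
SPEED_OF_LIGHT = 299_792_458  # meters per second

for wavelength_nm in (625, 740):
    wavelength_m = wavelength_nm * 1e-9                    # nanometers -> meters
    frequency_thz = SPEED_OF_LIGHT / wavelength_m / 1e12   # hertz -> terahertz
    print(f"{wavelength_nm} nm -> {frequency_thz:.0f} THz")

# Prints:
# 625 nm -> 480 THz
# 740 nm -> 405 THz
```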

Imagine a man completely blind from birth who, through intensive study, managed to become a leading expert on the properties of light and on the neurology of the cerebral cortex, where much of the brain’s visual processing occurs. Would this intensive study give him access to the experience of seeing red or any other color? Could one describe the color red to him based on one’s own experience, calling it “warm” or “like a sunset”?

Tommy Edison, a YouTuber who was born blind, said about this problem, “I don’t have any concept of what [color] is… It doesn’t mean anything to me. Over the years people have tried and tried and tried to explain color to me, and I just don’t understand it…People will try to explain a sense with another sense: ‘It’s like the way this smells, maybe, this is what a particular color is like.’ What?”7 Not just specific colors, but the entire concept of color as a category of sensory phenomenon is beyond comprehension to those who have never experienced it. While it might be possible to teach someone like Tommy Edison how certain objects reflect certain wavelengths of radiation into the eye, the mental phenomenon remains beyond scientific understanding.

Phenomena not precisely explainable by scientific reduction to their parts are aspects of mind that should not exist in a physicalist/reductionist system like the one Bertrand Russell proposed. The existence of some aspect of mind that cannot be scientifically explained as a series of interactions between particles and waves, and the fact that this aspect of mind can interact with the observable physical world, is a threat to the reductionist program. The reductionists can hold to the hope of an eventual hypothesis that scientifically explains the process of mental phenomena and keeps consciousness out of the equation, but such an explanation seems unlikely. David Chalmers uses the color example to address the possibility, or lack thereof, of scientific experimentation on mental phenomena through neurology, saying:

“…imagine that two of the axes of our three-dimensional color space are switched— the red-green axis is mapped onto the yellow-blue axis, and vice versa. To achieve such an inversion in the actual world, presumably we would need to rewire neural processes in an appropriate way, but as a logical possibility, it seems entirely coherent that experiences could be inverted while physical structure is duplicated exactly. Nothing in the neurophysiology dictates that one sort of processing should be accompanied by red experiences.”8

Colors are just one basic unit of larger mental phenomena in the real sensory world. More complex phenomena, like cinema or music, combine thousands of mental phenomena to create a new mental phenomenon, making the reduction of consciousness even more difficult. These phenomena are also tinted by our memories and the peculiarities of our minds. We can see this in the diversity of tastes, in which stimuli that can be scientifically measured as identical are perceived as completely different mental phenomena by two different people. The exact same stimulus can result in the mental phenomenon of a favorite song to one person and an annoying racket to another.

Beyond these references to phenomena of the senses, there are further indicators of the existence of a non-reducible consciousness. One example is Chalmers’s hypothetical “zombie twin,” a being physically, neurologically, and psychologically identical to Chalmers but lacking the ability to receive these mental phenomena.9 This “philosophy zombie” would behave identically to a human, though it might be confused if asked questions related to its consciousness and phenomenology. It would not have a consciousness the way other humans do, though that lack would not be immediately apparent from observation: the zombie would walk around, eat, use sight to avoid obstacles, and respond to pain. But even though it is mechanically identical to a human, there is still a fundamental difference between it and actual humans. That difference is consciousness, or the feeling of being itself and of experiencing phenomenology.

Thomas Nagel, in his classic paper “What Is It Like to Be a Bat?”, pointed out that the phenomenological experience of echolocation, which comes naturally to the mind of a bat, is beyond the understanding of human minds, even those of the biologists who have studied the process extensively.10 Human-built submarines, which “see” through echolocation in the form of sonar, must translate the mechanical process into something visible on a screen before sailors can use it.

Nagel argues elsewhere that consciousness in a basic sense does not follow from evolution as we understand it. He says in Mind and Cosmos:

“We recognize that evolution has given rise to multiple organisms that have a good, so that things can go well or badly for them, and that in some of those organisms there has appeared the additional capacity to aim consciously at their own good, and ultimately at what is good in itself. From a realist perspective this cannot be merely an accidental side effect of natural selection, and a teleological explanation satisfies this condition. On a teleological account, the existence of value is not an accident, because that is part of the explanation of why there is such a thing as life, with all its possibilities of development and variation. In brief, value is not just an accidental side effect of life; rather, there is life because life is a necessary condition of value. This is a revision of the Darwinian picture rather than an outright denial of it.”11

Nagel is claiming the universe is teleological (organized toward an end), but he is not arguing for the traditional conception of the universe being created by a deity; rather, he argues that consciousness has some part in creating or organizing matter and exists in some kind of symbiotic relationship with it. This consciousness, the mind that experiences the ineffable, non-reducible mental phenomenon, is sometimes referred to as the mind or the soul but is known in Latter-Day Saint thought as “intelligence.” This was revealed to Joseph Smith and recorded in Doctrine and Covenants Section 93: “Man was also in the beginning with God. Intelligence, or the light of truth, was not created or made, neither indeed can be. All truth is independent in that sphere in which God has placed it, to act for itself, as all intelligence also; otherwise there is no existence.”12

The most non-reducible phenomena come in the forms of religious experience: revelation, mysticism, miracles, the feeling of the Holy Ghost, the spiritual use of dowsing and seeing stones, visions, etc. Catholic writer Evelyn Underhill says in her response to William James, “True mysticism is active and practical, not passive and theoretical. It is an organic life-process, a something which the whole self does; not something as to which its intellect holds an opinion.”13

But even without religious experience as typically defined, there is a connection to the divine in human phenomenology. In the Book of Mormon, Alma argues against Korihor, an anti-Christ critic, that “…all things denote there is a God; yea, even the earth, and all things that are upon the face of it, yea, and its motion, yea, and also all the planets which move in their regular form do witness that there is a Supreme Creator.”14 This is typically interpreted as a teleological argument, but with a Latter-Day Saint understanding of the intelligences, the statement that the elements of creation “witness that there is a Supreme Creator” can be taken literally. If so, this witness is based on a transcendence separate from the traditional understanding of religious experience.

This conception of consciousness avoids the problem of “the ghost in the machine,” or an immaterial consciousness or spirit that somehow controls the matter which composes the brain and body. Consciousness, or the intelligence that constitutes it, is composed of what Joseph Smith called “finer matter.” Doctrine and Covenants 131 says, “There is no such thing as immaterial matter. All spirit is matter, but it is more fine or pure, and can only be discerned by purer eyes; We cannot see it; but when our bodies are purified we shall see that it is all matter.”15 Finer matter cannot be directly studied by human instrumentation, nor can the human mind understand it reductively. The part of the mind that receives non-reducible mental phenomena, including transcendence, is composed of this finer matter.

Transcendence in this sense is much broader and includes every sort of mental experience that can “denote there is a God.” These are events that, in certain mental conditions, can create a feeling of connection with God despite involving no claims of visionary or supernatural phenomena; experiencing a religious work of art or music is a common source of such experience, as is a walk alone in nature.

Mythologist Joseph Campbell formulates a model of this sort of experience based on a Freudian theory of the subconscious, in which certain phenomenological experiences, particularly the hearing of a story about a “hero’s journey,” are affirmed by the subconscious to the conscious, creating a feeling of transcendence. “The happy ending of the fairy tale, the myth, and the divine comedy of the soul, is to be read, not as a contradiction, but as a transcendence of the universal tragedy of man. The objective world remains what it was, but, because of a shift of emphasis within the subject, is beheld as though transformed.”16

More modern examples of his theory can be seen in certain films that evoke something like a transcendent experience without overtly addressing a religious topic, such as Star Wars (1977), Superman (1978), or The Lord of the Rings (2001-2003). These are movies that depict the archetypal “hero’s journey” Campbell studied. Film critic Roger Ebert compared Star Wars to “an out-of-the-body experience at a movie… The movie relies on the strength of pure narrative, in the most basic storytelling form known to man, the Journey.”17 George Lucas, writer/director of Star Wars, credited Campbell’s influence for this effect, which resonated with popular audiences and made Star Wars the most successful film to that point.

Campbell’s theory of transcendence is based on a Freudian/Jungian understanding of the subconscious mind, and he asserts that evolution has created this system by which humans subconsciously crave storytelling to provide meaning. This understanding is adequate to explain the feelings of transcendence associated with a select few types of phenomena, like the receiving of a “hero’s journey” story, but inadequate to explain those things that are not evolutionarily beneficial. Joan of Arc’s transcendent experience led her not to “evolutionary success” but to martyrdom.

Catholic philosopher Stephen Fields attempts to explain transcendent experience through the grace of Jesus Christ. In his model, Christ is the standard toward which all aesthetics strives, and his grace acts through us when we experience art that imitates Christ in some way:

“If the intrinsic structure of reality is radiated forth in the incarnation, then all deeds and words of the particular Jewish man Jesus must reveal, apriori, the divine standard of beauty. In other words, if God is beautiful by definition, and if Christ is God, then the acts of Christ must set the first principles of authentic aesthetics. It follows that Christ’s beauty must accordingly judge, or cast into a shadow, the beauty of all other created forms.”18

The wider scope of transcendent experience (heroic tales, other religious art, secular art, the beauty of nature) is understood in this model as falling short of the artistic standard of depicting the Passion of Jesus Christ, and as having artistic value only insofar as it shares in his grace. This model, however, fails to adequately explain the richness contained in that wider scope of transcendent experience, or why those without a Christian understanding are unmoved by the images of Christ that supposedly represent the perfect artistic ideal.

The understanding of transcendent experience as a special class of mental phenomenon is essential to understanding its value to human knowledge and to explaining why transcendent experiences disagree. The diversity of religious views and understandings demonstrates this problem, as does the inability of transcendent experience to address fine details of theology and metaphysics.

When revelation like that received by the followers of Ann Lee, mentioned above, appears to be contradicted by the revelation received by the prophet Joseph Smith, it is not, in fact, a contradiction of revelation. It is rather a difference in perception and interpretation, similar to the differences in perception of ordinary sensory phenomena. In our perceptions of the transcendent, “The fault… is not in our stars, But in ourselves.”19

Transcendence is not felt or understood uniformly because it is a phenomenological experience, similar to the mental experiences of seeing a color or hearing music. There is no transcendent experience outside human phenomenology. Transcendent experience is a feature of the conscious mind, not of scientifically reducible brain matter. But it reflects the physical reality of “finer matter” just as ordinary sensory experience reflects the reality of matter as we ordinarily understand it.

Though we as humans do not perceive everything identically, our subjective phenomenological sensory experiences can still collectively show underlying truths. For example, we know that even if two people disagree on whether a certain piece of music is good or bad, they do agree on the underlying fact that there is some kind of sound being made. The subjective phenomenological experience of hearing music is evidence of the concrete fact that sound waves are traveling nearby.

We disagree even about the nature of our consciousness itself, the thing to which we are phenomenologically closest. Physical reductivists like Russell observe their own consciousness and see nothing beyond the mechanical, or at least nothing great enough to overcome a worldview that denies the transcendent self. But when the spiritually attuned observe their consciousnesses, they perceive God through transcendence. As the late Truman Madsen, BYU emeritus professor of religion and philosophy, says:

“One begins mortality with the veil drawn, but slowly he is moved to penetrate the veil within himself. He is, in time, led to seek the “holy of holies” within the temple of his own being… There is inspired introspection. As we move through life, half-defined recollections and faint but sometimes vivid outlines combine to bring a familiar tone or ring to our experience. One feels at times at home in a universe which, for all that is grotesque and bitter, yet has meaning.”20

Everything witnesses there is a God when questioned by the intelligence of mankind because everything material has underlying intelligence of its own, composed of the finer matter. This is transcendence in its fullest sense, and it is how we understand God and our nature. Religious experience in all its forms is phenomenological, giving it commonality with our everyday sensory perception, but it is also transcendent, giving us access to the “finer matter” beyond the world of our ordinary understanding.


Endnotes:

1. Doctrine and Covenants 49:23 (1981 Edition).

2. “Revelation” in Guide to the Scriptures (Salt Lake City: Intellectual Reserve, Inc., 2013), https://www.churchofjesuschrist.org/study/scriptures/gs/introduction?lang=eng. See also Matthew 16:18 (KJV).

3. Doctrine and Covenants 42:61 (1981 Edition).

4. Louis P. Pojman, Philosophy of Religion (Long Grove, IL: Waveland, 2001), 57.

5. William James, The Varieties of Religious Experience (1902; repr., Grand Rapids, MI: Christian Classics Ethereal Library, 2005), 12, https://www.ccel.org/ccel/james/varieties.pdf.

6. Bertrand Russell, “The Finality of Death” in Philosophy of Religion, An Anthology, ed. Louis P. Pojman and Michael Rea (Belmont, CA: Wadsworth, 2008), 337.

7. The Tommy Edison Experience, “Describing Colors As A Blind Person,” YouTube Video, 2:39, December 4, 2012, https://www.youtube.com/watch?v=59YN8_lg6-U.

8. David Chalmers, The Conscious Mind: In Search of a Fundamental Theory (New York: Oxford University Press, 1996), 100.

9. See Robert Kirk, “Zombies”, The Stanford Encyclopedia of Philosophy (Spring 2019 Edition), https://plato.stanford.edu/archives/spr2019/entries/zombies/.

10. Thomas Nagel, “What Is It Like to Be a Bat?” The Philosophical Review LXXXIII, no. 4 (October 1974): 435–50.

11. Thomas Nagel, Mind and Cosmos: Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False (New York: Oxford University Press, 2012), 122-123.

12. Doctrine and Covenants 93:29-30 (1981 Edition).

13. Evelyn Underhill, Mysticism: A Study in the Nature and Development of Spiritual Consciousness (1911; repr., Grand Rapids, MI: Christian Classics Ethereal Library, 2005), 78, http://www.ccel.org/ccel/underhill/mysticism.pdf?membership_type=b10f8d8331236b8b61aa39bc6f86075c12d7e005.

14. The Book of Mormon: Another Testament of Jesus Christ, Alma 30:44 (1981 Edition).

15. Doctrine and Covenants 131:7-8 (1981 Edition).

16. Joseph Campbell, The Hero with a Thousand Faces (1949; repr., Novato, CA: New World Library, 2008), 21.

17. Roger Ebert, “Star Wars” (Chicago Sun-Times, 1977), https://www.rogerebert.com/reviews/star-wars-1977.

18. Stephen Fields, Analogies of Transcendence (Washington, D.C.: Catholic University of America Press, 2016), 156.

19. William Shakespeare, Julius Caesar I.II.147-148.

20. Truman G. Madsen, Eternal Man (Salt Lake City: Deseret Book Company, 1966), 20.


Top image: A Group of Shakers, from an 1875 woodcut.

Hume’s Sleight-of-Hand Skepticism

“A miracle may be accurately defined, a transgression of a law of nature by a particular volition of the Deity, or by the interposition of some invisible agent.”1 When David Hume wrote this definition in An Enquiry Concerning Human Understanding, he said he had “discovered an argument of a like nature, which, if just, will, with the wise and learned, be an everlasting check to all kinds of superstitious delusion, and consequently, will be useful as long as the world endures.”2 This argument, however, relies on the rhetorically loaded definition he invented, and it ultimately disproves something no one “wise and learned” actually believed.

Hume argues that miracles are logically impossible because they are defined as breaking the laws of nature, and everyone knows through the sum of their empirical observation that breaking the laws of nature is impossible. He says that even if a miracle has a multitude of personal eyewitnesses, their testimony must be weighed against the sensory data we have received over a lifetime asserting that the natural laws are consistent, unbreakable, and necessary for the continued function of the world and our sciences.

A God who created the world and its initial conditions with perfect knowledge of the outcomes would have no need of “miracles” as defined by Hume. Hume uses the word “marvel” for what any other English speaker would call a “miracle.” His definition thus sets up a strawman version of the belief in miracles for him to disprove.

“But in order to encrease the probability against the testimony of witnesses, let us suppose, that the fact, which they affirm, instead of being only marvellous, is really miraculous; and suppose also, that the testimony considered apart and in itself, amounts to an entire proof; in that case, there is proof against proof, of which the strongest must prevail,…”3

This method of reconciling contradictory proofs is epistemologically bankrupt. If two contradictory things seem to be proven, it is illogical to weigh which one is more proven than the other. When contradictory proofs appear, it should be obvious that one or both proofs are flawed, or, more likely, that the terms on which the proofs are evaluated are inconsistently used or poorly defined. Setting aside the problematic use of the word “proof” for evaluating empirical data, it’s quite clear that the words “marvelous” and “miraculous” are being used dishonestly.

The phenomena Hume classifies as “marvelous” are what typical English-speaking Christians would call “miraculous.” But in assigning a new definition to the term, Hume creates the appearance of a logical contradiction:

1. Impossible (Hume’s definition of “miraculous”) events have been witnessed.
2. Impossible events cannot occur.
3. Therefore, the witness is invalid.

If we instead use a definition of miracle that is more consistent with the claims of the believer, we would not assert that impossible events have been witnessed, only events that are unexplainable, or explainable only to God or a mystic. This removes the illusion of a logical contradiction, as the sketch following the second syllogism makes explicit:

1. Unexplainable (layman’s definition of “miraculous”) events have been witnessed.
2. Impossible events cannot occur.
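The difference between the two syllogisms can be spelled out with a small propositional sketch in Lean 4 (the atoms and hypothesis names are my own hypothetical labels, not Hume’s terms). Only when the definition itself supplies the bridge from “witnessed” to “impossible” does a conclusion against the witness follow:

```lean
-- Hume's version: "miraculous" is *defined* as impossible, so testimony
-- to a miracle refutes itself.
example (Witnessed Impossible : Prop)
    (humes_def : Witnessed → Impossible)  -- what was witnessed is, by definition, impossible
    (premise   : ¬Impossible)             -- impossible events cannot occur
    : ¬Witnessed :=
  fun w => premise (humes_def w)

-- The layman's version replaces `Impossible` with `Unexplainable`. Without
-- a premise linking the two, the hypotheses simply coexist: no conclusion
-- against the witness can be derived.
```

Under the layman’s definition the two premises are logically independent, which is precisely why the appearance of contradiction dissolves.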

Hume’s argument against miracles is only fatal to a conception of God and miracles that very few people, if any, actually hold: that God is subject to natural laws, that he did not create the world or at least did not fully anticipate the consequences of doing so, and that he cannot create miracles consistent with natural laws. It’s an argument that still shows up in contemporary atheism now and then, and it’s this kind of rhetorical sleight of hand that demonstrates why we should always “doubt our doubts,” as President Uchtdorf counseled. We should always watch for people being sneaky with their terms, and treat such arguments as we would a magician’s trick.


Note: This essay was written in part for BYU’s Philosophy of Religion course (PHIL 215) in August of 2019 and was revised in June of 2021 for publication online. For a refutation of Hume’s epistemological framework and a proposed alternative, see my essay “Hume and True Skepticism: How Do We Know?”


Endnotes:

1. David Hume, An Enquiry Concerning Human Understanding (1748; repr., Project Gutenberg, 2003), Footnote 22, https://www.gutenberg.org/files/9662/9662-h/9662-h.htm.

2. Hume, Enquiry, 86.

3. Hume, Enquiry, 90.


Top Image: The Incredulity of Saint Thomas by Caravaggio, circa 1601-1602.

The Necessity of Suffering in Frankl’s Existentialism

Viktor E. Frankl’s experiences in the concentration camps, detailed in Man’s Search for Meaning, have enduring significance and popularity among contemporary Christians and Jews. They demonstrate that suffering can be a source of meaning, but Frankl stops short of saying that suffering is essential: “But let me make it perfectly clear that in no way is suffering necessary to find meaning. I only insist that meaning is possible even in spite of suffering–provided, certainly, that the suffering is unavoidable.”1 He argues that there are other sources from which man can discover meaning: “There are three main avenues on which one arrives at meaning in life. The first is by creating a work or by doing a deed. The second is by experiencing something or encountering someone…”2

Accomplishment, appreciation of art, and interpersonal relationships, then, are sources of meaning independent of suffering, according to Frankl. Frankl is considered a religious existentialist because he examines the meaning of life and of existence itself, and we can therefore examine and expand on the existentialist philosophy he created. In examining each of the sources of meaning Frankl identified, we can see not only that they contain suffering in various forms (usually less severe and more subtle than that of the concentration camps), but that the suffering endemic to these sources is necessary for them to possess meaning.

The first source which Frankl names is the accomplishment of creating a work or doing a deed. The writer who has stared down a blank page struggling to find words and the academic who studies a lifetime to make a contribution to their field are both uniquely familiar with the true scope of their respective accomplishments once complete. Those who have not endured the suffering of bullying the brain to keep writing when it wants nothing but to quit cannot understand their accomplishment. But the person who has suffered to bring about an accomplishment knows its value and can truly appreciate it.

This suffering exalts and gives meaning to the purpose it is directed toward. We can see this even in more trivial accomplishments, like winning a video game. Certain video games have options to increase or decrease their difficulty, and high difficulties impose a certain level of suffering on the player, who endures mental strain and frustrating failures before reaching the goal. Why does the player ever set the difficulty above the easiest setting? Because the difficulty gives meaning to the accomplishment of finishing the game, even if the difficulty is voluntary and minor compared to the other sufferings of life.

The second source of meaning, in part, is “experiencing something,” or beauty and art. We can see that art involves certain subtle or vicarious forms of suffering, which give it meaning. The richest works of art often deal with tragic subjects; Oedipus, Hamlet, Götterdämmerung, Citizen Kane, and The Godfather each rely on some form of empathetic or vicarious suffering on the part of the viewer to give the work emotional meaning. This applies to all art in some form. Those who study the theory of comedy find that it is based on pain examined from a new perspective. A world without suffering, or at least minor inconvenience, would be devoid of all humor. Even the contemplation of perfect beauty involves a different, more subtle suffering: a hopeless longing to somehow be a part of that beauty, which can never be fulfilled. The Book of Mormon is filled with accounts of wars and even genocides, which are a key part of how it teaches us about suffering, sin, and redemption through Christ.

The second part of this source is what Frankl calls “encountering someone,” or love and friendship. Once again, this is an area of life that is not free of suffering, and it may be that love and friendship are made real by sacrifice. The most meaningful relationships in most lives are those with one’s spouse and children, the people for whom most is sacrificed.

In Fahrenheit 451, Ray Bradbury has a character say that the importance of a book comes from its texture: “To me it means texture… They show the pores in the surface of life. The comfortable people want only wax moon faces, poreless, hairless, expressionless.”3 This principle can be applied, through suffering, to all of life: if anything is to have meaning, it must have texture, or some amount of suffering.

We believe in a God who suffers. Christ suffered tremendously as part of the atonement, but our Heavenly Father suffers as well because of the iniquities of his children. His status as a God entitles him to the extreme joy of bringing about “the immortality and eternal life of man,” but it also consigns him to the suffering of a worried parent.

When Frankl qualified his thesis by saying that suffering is not necessary to find meaning, he may not have wanted to compare the relatively trivial sufferings of everyday life to those of the concentration camp, or to suggest that maximizing suffering is the path to meaning. It seems from these examples that strategically minimal suffering can maximize its return of meaning, and that the path for those who follow Frankl’s tradition of existentialism is to minimize unnecessary suffering, to extract all possible meaning from the suffering that cannot be avoided, and to be willing to pursue meaning despite the suffering that may accompany the pursuit.


Note: This essay was written in part for BYU’s Philosophy of Religion course (PHIL 215) in August of 2019 and was revised in June of 2021 for publication online. I highly recommend reading Man’s Search for Meaning. Not only is it one of the most compelling accounts of the Nazi concentration camps ever written, but it also examines the experience through Frankl’s unique psychological and philosophical lens.


Endnotes:

1. Viktor E. Frankl, Man’s Search for Meaning (1946; repr., Boston: Beacon, 2014), 106.

2. Frankl, Man’s Search, 137.

3. Ray Bradbury, Fahrenheit 451 (1951; repr., New York: Simon & Schuster, 2013), 79.


Top image: A prayer room at Theresienstadt, painting by Malva Schalek, circa 1942-1944. Viktor Frankl and his family were sent to the Theresienstadt concentration camp in 1942, where his father died. He was later sent to Auschwitz in 1944.

Autonomy, Power, and the Possible: A Brief Intellectual History

Preface: This essay was written for BYU’s History of Ideas course (HIST 312) in April of 2021. The recent attempts of certain parts of the intellectual left to pretend that cultural Marxism doesn’t exist have made it suddenly relevant, as it’s necessary to place the stages of modern and postmodern Marxism in their context within the narrative of freedom and power. I’ve made some minor changes to this essay to highlight that. Book references are included in parentheses rather than as endnotes to make them simple to identify.


Is human autonomy possible?

If so, to what degree? And how do we recognize autonomy when we have gained it?

The starting point for such a question depends on what autonomy is. We have different words for it: freedom, liberty, self-determination, liberation. All of these words seem to be getting at a concept that humans understand and have a natural inclination to pursue, and yet we disagree on what it is. This appears to be more than just an argument over definitions, as it has been an issue since before the days of Socrates and has been argued in every language common to western philosophical writing.

Isaiah Berlin makes a well-known set of distinctions that is useful for classifying the two main threads of thought into which the “over two hundred senses” of liberty can be separated (Isaiah Berlin, Two Concepts of Liberty, 1). Liberty in the “liberal” sense, in the tradition of Thomas Hobbes, John Locke, and the writers of the United States’ Declaration of Independence, Constitution, and Federalist Papers, is what Berlin calls “negative liberty.” Autonomy in this sense is the freedom from violence and violent threats against an individual’s “life, health, liberty, or possessions” (Locke, Second Treatise of Government, § 6). The second set of thought systems concerning autonomy is “positive liberty,” which takes many forms and includes the systems of Aristotle, Rousseau, Hegel, and Marx and his followers. It is more nebulous and difficult to define and recognize than negative liberty, but can be generally characterized as seeking freedom from influence. This could be the influence of our baser instincts that distract us from the pursuit of collective civic virtue, in the understanding of Aristotle or Rousseau, or it could be the influence of economic or other external forms of power, in Marxist, post-Marxist, and neo-Marxist doctrine.

A common rule of thumb for this distinction is that negative liberty is “freedom from,” while positive liberty is “freedom to.” But the rhetorical thrust in the systems of Marx and his various followers is also on “freedom from”: not freedom from acts of violent aggression (attacks on “life, liberty, and property,” as Locke and his followers might define it), but freedom from influence, or “power.” Freedom of speech can be conceived as the freedom to speak, as though it were a positive right, but it is typically legally understood as freedom from deprivation of life, liberty, or property in retaliation for one’s speech. A better version of the rule of thumb, therefore, could be that negative liberty is freedom from aggression, whereas positive liberty is freedom from influence.

Thinkers in the tradition of positive liberty before Marx focused on the influence of man’s base, selfish instincts, while Marx focused on the influence of economic power. Later neo-Marxists and post-Marxists identified and criticized many other forms of power, which they then related to the economic order in various ways to remain within the Marxist tradition. This dominant contemporary conception of influence and power was formalized in the work of the mid-twentieth-century French critic Michel Foucault, who said that the modern struggle for autonomy “is a question of orienting ourselves to a conception of power which replaces… the privilege of sovereignty with the analysis of a multiple and mobile field of force relations” (Michel Foucault, The History of Sexuality, 102). This conception of power “is everywhere; not because it embraces everything, but because it comes from everywhere” (Sexuality, 93).

Given these two radically different understandings of human autonomy, the degree to which autonomy is possible, and the approach for achieving or approximating it, depends on each thinker’s conception of the forces that oppose autonomy. Among the theorists of negative liberty, autonomy is possible insofar as it is possible to curtail violence against life, liberty, and property while limiting the extent to which the state itself threatens them, usually by placing the individual and the state within a social contract. Among the theorists of positive liberty, autonomy requires the elimination of all power. To some of these thinkers, power is an inevitable part of human life which we can fight against in a “perpetual revolution” but never eliminate; to others, power can eventually be eliminated when we reach the “absolute,” that end point of the Hegelian/Marxist dialectical process that represents utopia, the end of history, and perfect autonomy.

The concept of negative liberty certainly wasn’t created in the modern period, but the emergence of the individual as a focus of thought during the Enlightenment allowed for the refinement of negative liberty into its modern form. In the modern period, negative liberty is primarily associated with British and American thinkers and with the empiricist and pragmatist epistemological traditions. Multiple epistemological and metaphysical outlooks can lead to a conclusion of negative liberty; the only metaphysical stance necessary for a theory of negative liberty is some conception of the individual as subject.

Thomas Hobbes and John Locke, while both British empiricists, differ fundamentally in their understanding of human nature and in their ultimate conclusions regarding government. But they agree on the basic question of what freedom is. Hobbes accepts the notion of autonomy as “the absence of external impediments,” though he believes that without a state, a Leviathan to limit freedom, people would live in a constant state of war, their freedom paradoxically leading to mass deprivations of freedom by others (Thomas Hobbes, Leviathan, Ch. 13). Unless freedom is surrendered to the absolute power of the Leviathan via the social contract, it is even more radically limited. As Hobbes infamously says, in this state life is “solitary, poor, nasty, brutish, and short” (Leviathan, Ch. 13). Only limited human autonomy is then possible, according to Hobbes, as it is either compromised by constant war in the state of nature or given up to the ruler as part of the social contract.

Locke’s definition of autonomy may not be accepted as universal within intellectual circles, but it is the premise beneath many political traditions that came after him. He agrees with the principle behind negative freedom, though he tries to expand the definition in a way that further excludes the possibility of its being interpreted as positive freedom, saying that “no one ought to harm another in his life, health, liberty, or possessions” (Second Treatise, § 6). Locke makes clear that liberty is the right not to be subjected to aggression or threats of aggression, but it does not include the “license” to subject another to aggression or threats of aggression. Locke’s conception of liberty is useful because it makes it particularly simple to identify whether someone is free: one need only ask whether they are being subjected to attack or threat of attack (except as retaliation, or the possibility of retaliation, for their own acts of aggression). This is the liberty referenced in the Declaration of Independence and by the anti-slavery abolition movements.

Locke differs from Hobbes in his understanding of human nature. He believes that the state of nature is already one of autonomy, though there exists the risk of attack against that autonomy. It is not a state of constant war, as Hobbes believes. Locke says that in the state of nature, each individual is tasked with the preservation of liberty by exercising their “right to punish the offender and be executioner of the law of nature” (Second Treatise, § 7). This right is part of their autonomy, and it is a right they give up to the state upon entering a social contract. When one is a member of a social contract to which one has consented, one arguably remains perfectly autonomous in Locke’s understanding.

Later thinkers in the tradition of Locke have focused on discovering how liberty might be preserved while sacrificing a minimum of one’s own autonomy to the state, or even on rejecting the social contract as non-consensual and therefore incompatible with autonomy. After the rise of Marxist thought, thinkers working in the paradigm of negative liberty, from John Stuart Mill and Lysander Spooner in the nineteenth century to Robert Nozick and Murray Rothbard in the late twentieth, were relegated to the fringes outside the mainstream of Western academic philosophy.

“Man is born free, but everywhere he is in chains” (Jean-Jacques Rousseau, Of the Social Contract, Bk. 1, Ch. 1). Rousseau’s assertion begins the modern understanding of positive liberty, and thinkers and academics from him to Foucault two centuries later have dedicated their careers to the analysis of the forms these chains take. Rousseau thinks of liberty in an Aristotelian sense, focusing on what he calls “civil liberty” or “moral liberty, which alone makes him truly master of himself” (Social Contract, Bk. 1, Ch. 8). He acknowledges the existence of negative liberty as a “natural independence” but believes that civil liberty is the ultimate form of human autonomy, one which can only be gained by entering a social contract (Social Contract, Bk. 2, Ch. 4).

Rousseau’s civil liberty is the liberty of a citizen. “Individual self-interest may speak to him quite differently from how the common interest does,” so that where they conflict, “each individual will be forced to be free” (Social Contract, Bk. 1, Ch. 7). The common interest, or “general will,” represents man’s true interest, and to comply with and support that general will is to be free from those chains. The obvious political problems that come from discovering the general will and forcing compliance with it mean that human autonomy waxes and wanes with the life cycle of each government. “The body politic, like the human body, begins to die as soon as it is born, and carries in itself the causes of its destruction” (Social Contract, Bk. 3, Ch. 11).

Hegel represents a turning point in thought, not merely because of his influence on Marx, but also due to the implications of his dialectical method. In Hegel’s dialectical conception of society, the particular and the universal are essential members of a relationship that makes them complete, or “absolute.” This is the dialectical approach that Hegel applies to every basic relationship in metaphysics. This approach allows for both negative and positive liberty, as the liberty of both the individual (the particular) and society (the universal) are necessary for absolute liberty.

“All the qualities of Spirit exist only through Freedom; that all are but means for attaining Freedom” (Hegel, Philosophy of History, 77). This conception differs from that of Rousseau, as both freedom of the individual and of society are necessary for absolute freedom, and therefore forcing someone to be free would not be compatible with absolute freedom. Hegel believes that all history is a process of realizing and refining autonomy, and that the disagreement between Locke and Rousseau on the meaning of freedom will be reconciled in the ultimate realization of autonomy.

The Hegelian approach to the history of ideas means that ultimate, absolute autonomy is possible when society reaches the “absolute.” This is the state in which the spirit, meaning both the spirit of the individual and the spirit of society, exists “in and for itself.” Hegel believes that all aspects of society follow this approach of dialectic progression, that science, philosophy, art, law, politics, and any other matters of contention are constantly being refined and will eventually reach an end state, the final “synthesis” or “absolute.” True and complete autonomy is only possible in this final state of synthesis, as freedom is an “indefinite, and incalculable ambiguous term” that can only be understood within the absolute (Philosophy of History, 79).

Marx inherits Hegel’s belief that history is undergoing a process to achieve a final state of synthesis, though he “turns Hegel on his head” by focusing on the economic processes he believes determine changes in political and intellectual life. His approach begins a unique trend in its focus on the ultimate causes of the forces that exercise influence on people against their autonomy. He believes that workers in the capitalist system are slaves, due both to the influence the employer exercises by offering wages in exchange for the fruits of their labor and to the influence the capitalist system as a whole has on “consciousness.” There is no state of nature or social contract in Marxist thought, because humans are fundamentally economic creatures whose behavior has always been influenced by economic demands, even when those were just the economic demands of the household.

A truly autonomous person, says Marx, is not influenced by the lure of wages or the economic needs of society. An autonomous person can “do one thing today and another tomorrow, to hunt in the morning, fish in the afternoon, rear cattle in the evening, criticize after dinner… without ever becoming hunter, fisherman, shepherd or critic” (Karl Marx, “The German Ideology” in Marx-Engels Reader, 160). As long as there are economic factors that influence his work, he is not truly autonomous. But Marx believes that after the overthrow of capitalism, in the Communist society that constitutes the “absolute” theorized by Hegel’s approach, there will be no economic factors influencing people’s behavior, and they will therefore be completely free.  

Autonomy is not just a possibility for Marx, it is an inevitability, though it can only exist in that “absolute” state after the workers inevitably overthrow capitalism and establish a state without property. Marx believes that in this state there will be no economic influences, and therefore no other kind of influences, acting on anyone, as politics, law, and culture are downstream of and dependent on economics.

Marx’s understanding of autonomy as the absence of power or influence was tremendously influential in academia, though the historical events of the first half of the twentieth century caused a crisis for the concept of Hegelian/Marxist progression. The Hegelian belief that society was progressing toward a stable end state of absolute justice was seriously threatened when the Second World War showed that the First World War was not the “war to end all wars” but the beginning of a period of over 200 million excess deaths in less than thirty years, setting up not a stable peace but another potential showdown between the U.S. and the U.S.S.R. in its immediate aftermath. In Bolshevik Russia and Maoist China, it appeared that communism was not a final stage of economic progression but a political movement. The Marxist thinkers of the post-war period also had to explain why communist nations appeared to have oppression without economic oppression, which shouldn’t be possible in the Marxist framework. Put simply, the first problem is whether positive freedom, the absence of all power/influence, is still to be considered inevitable or even possible, and the second is whether economic power is the fundamental form of power/influence.

Antonio Gramsci was confronted directly by these problems as a prisoner of the regime of Mussolini, who had once been a member of the National Directorate of the Italian Socialist Party and yet had created a regime that was clearly not the absolute state Marx predicted, one with even greater oppressions than before. Gramsci, a Marxist, believes the missing explanatory factors in Marxist thought are culture and the intellectual class responsible for it. Cultural influence is still downstream from economic influence, according to Gramsci, but the bourgeoisie, through their control over culture, have the ability to delay or redirect the natural reaction to their economic oppression by indoctrinating the proletariat with their bourgeois morality. “The intellectuals are the dominant group’s ‘deputies’ exercising the subaltern functions of social hegemony and political government” (Antonio Gramsci, “The Intellectuals” in Selections from Prison Notebooks, 145).

Theodor Adorno, a German neo-Marxist of the “Frankfurt School” working out of Columbia University during and after WWII, follows and expands on Gramsci’s insight regarding culture in the capitalist world. He is aggressively critical of negative freedom, saying that “freedom to choose an ideology, which always reflects economic coercion, everywhere proves to be freedom to be the same” (Theodor Adorno, “The Culture Industry: Enlightenment as Mass Deception” in Dialectic of Enlightenment: Philosophical Fragments, 135). Culture, for Adorno, is also a means of influence which threatens human autonomy.

The project of both Adorno and Gramsci is to identify culture as a causal factor in the progress of history. Just as Marx believes that capitalism is an economic race to the bottom, in which things get progressively worse for the proletariat until the proletariat inevitably retaliates, leading to the final synthesis, Adorno believes that there is a similar race to the bottom in culture. “Today, works of art, suitably packaged like political slogans, are pressed on a reluctant public at reduced prices by the culture industry” (“Culture Industry,” 133). If we understand the cultural decline Adorno links with capitalism as the same kind of process as the economic decline Marx links with capitalism, then we can predict what must happen in culture if it follows the Marxist pattern in economics. Eventually, the proletariat must rise up against the culture industry, which, according to Adorno, “they recognize as false” (“Culture Industry,” 136). The inevitability of a final absolute state, in which all power/influence against autonomy is overthrown, is thereby resurrected in Adorno’s system, because the true revolution will be against the capitalist “culture industry.” This is why Adorno and the Frankfurt School can be referred to as “neo-Marxists” or “cultural Marxists.”

In traditional Marxism, the economy is the structure on which everything else in society – culture, science, philosophy, religion, etc. – is built. Economics is the structure; everything else is the “superstructure.” Changes to the structure precede changes to the superstructure, but changes to the superstructure cannot change the base structure itself. Cultural Marxism is simply any strain of Marxism that considers culture to be part of the base structure rather than the superstructure. It asserts that the overthrow of capitalist culture is part of the overthrow of the capitalist economy, not an aftereffect of that overthrow. This intellectual framework is what makes Adorno a “cultural Marxist” rather than just a “Marxist cultural critic.”

The cultural Marxist framework might answer the first of Marxism’s two twentieth-century problems, preserving the claim that positive freedom, the absence of all power/influence, is the final synthesis at the end of history. But Michel Foucault argued that there are still forms of oppression or influence that are not fully accounted for by economics and culture. Foucault believes that “it is in discourse that power and knowledge are joined together,” and he commonly uses the term “power-knowledge” to show that they are fundamentally connected (Sexuality, 100).

In his History of Sexuality, Foucault claims that these discourses shape human identity, using the example of homosexuality. He points out that the category of “homosexual” was developed as part of Victorian morality, saying that “The machinery of power that focused on this whole alien strain did not aim to suppress it, but rather to give it an analytical, visible, and permanent reality: it was implanted in bodies, slipped in beneath modes of conduct, made into a principle of classification and intelligibility, established as a raison d’être and a natural order of disorder… The strategy behind this dissemination was to strew reality with them and incorporate them into the individual” (Sexuality, 44). Foucault believes that the identity of homosexuality, like all other identities, is created by discourse as an exercise of power.

Everything is systemic for Foucault, and the individual is just a node in the system, a construct of language and discourse. These power-relations are not simply an act of oppression by one person or group against another, as they are in the Marxist tradition or even in the tradition of negative freedom. As Foucault puts it, “power relations… are imbued, through and through, with calculation: there is no power that is exercised without a series of aims and objectives. But this does not mean that it results from the choice or decision of an individual subject” (Sexuality, 95).

Foucault’s analysis of power describes the means by which we might resist, overthrow, or transform it, but we ultimately cannot eliminate power, because power is knowledge. Power-knowledge certainly isn’t caught in a race to the bottom that will lead to the inevitable destruction of all power-knowledge, as capitalism and the culture industry are claimed to be in Marxism and neo-Marxism. Human autonomy is not inevitable in Foucault’s system, which is why he can be categorized as a post-Marxist. So to what degree is autonomy even possible in this now widespread Foucauldian worldview? Our freedom is limited by the range of options made possible within the various discourses or power relations at whose nexus we exist. The Foucauldian system leaves us without even a criterion by which we can tell whether or not we are autonomous, as any knowledge of one’s autonomy is itself power-knowledge, and therefore subject to the influences of outside discourse.

Autonomy in the sense of negative freedom is easy to identify, if not always politically possible to achieve. Autonomy in the sense of positive freedom before Hegel and Marx, as used by Rousseau and other earlier thinkers, is similarly identifiable and achievable, though difficult. In the systems of dialectical progression of history, Hegelianism, Marxism, and neo-Marxism, autonomy is not only achievable but inevitable. But in the post-Marxist sense, in which all sources of power and influence must be overthrown as forces against positive liberty, autonomy is ultimately not possible, though it remains something we must strive for anyway. This last understanding of autonomy differs more radically from the others than Locke’s negative liberty differs from Rousseau’s positive liberty, as it calls for a state of “perpetual revolution,” as Mao put it. It calls for nothing less than the imperative to constantly redefine autonomy and then overthrow that new definition as another iteration of power-knowledge.


Top Image: “The Triumph of the Guillotine in Hell” by Nicolas Antoine Taunay, 1795

Foundation of Our Faith: The First Vision in Church Publication and Film

In 1911, the Danish silent crime film A Victim of the Mormons was a huge success in the United States and England, initiating a decade of sensationalistic anti-Mormon motion pictures in popular cinema with lurid titles like Marriage or Death and Trapped by the Mormons.1 A Victim of the Mormons was a wild tale of a Danish girl seduced and kidnapped by missionaries and taken to Utah to be married to a villainous polygamist before being rescued by the Danish hero.2

Trade journal advertisement for A Victim of the Mormons (1911)

In the early years of cinema, the typical response of any institution that objected to the content of a motion picture was an attempt to ban the film, one state and distributor at a time if necessary. Lobbying by various interest groups to ban pieces of media was a common and accepted practice before the 1950s. But the efforts of leaders and connected members of The Church of Jesus Christ of Latter-Day Saints (known colloquially as the Mormon Church or the LDS Church, hereafter referred to as simply ‘the Church’) to ban the film had failed even in the state of Utah, and served only to draw more publicity to A Victim of the Mormons. Therefore, in June of 1912 Church leaders planned a different response to the new wave of anti-Mormon film: they would sponsor their own motion picture.3

Newspaper ads for One Hundred Years of Mormonism (1913)

One Hundred Years of Mormonism told the story of the Church from the birth of its founder Joseph Smith to the trek west of the 1840s and 1850s. It was an ambitious undertaking, produced at a then-impressive cost of $50,000 and running six reels—making it one of the longest films of its time. The now lost film included the 1844 martyrdom of Joseph and Hyrum Smith, staged multiple elaborate and expensive scenes of hundreds of pioneers trekking west, and used double-exposure photography to depict the 1823 appearance of the angel Moroni to Joseph Smith described in the introduction of the Book of Mormon.4 But the feature-length motion picture omitted an event modern members and leaders of the Church consider central to their faith, secondary only to the death and resurrection of Jesus Christ—the First Vision of Joseph Smith.5 It wouldn’t be until 1976 that the event would be depicted in narrative film.

One of the few surviving images from One Hundred Years of Mormonism. This is one of the earliest uses of double exposure in film special effects, a technique pioneered by French filmmaker Georges Méliès around the turn of the century.

Gordon B. Hinckley, President of the Church from 1995 to 2008, said in a 2002 General Conference, “We declare without equivocation that God the Father and His Son, the Lord Jesus Christ, appeared in person to the boy Joseph Smith… Our whole strength rests on the validity of that vision.”6 Moreover, beginning in the early 1960s, Church missionaries have been expected to memorize and recite portions of Joseph Smith’s account of the First Vision as the key part of their first lesson.7

The First Vision serves as the modern Church’s primary symbol of the power of prayer, the founding of the Church (“the Restoration”), the nontrinitarian nature of the Godhead, and the prophetic calling of Joseph Smith. This, however, was not always the case. The story of the First Vision was almost entirely unknown in the early Church and did not occupy its current high place in the teachings and culture of the Church until the middle of the twentieth century.8 Just as the Church created One Hundred Years of Mormonism in response to cinematic critics, a similar pattern of criticism and response shaped the way the First Vision story was and continues to be told in the Church, which this paper will examine.

The First Vision in Historiography

James B. Allen, former Assistant Church Historian and professor of history at Brigham Young University, has done the most extensive research on “The Expanding Role of Joseph Smith’s First Vision in Mormon Religious Thought,” as he titled a 1980 paper. He attributes the “metamorphosis” to the teachings of Church leaders and theologians in the decades after the martyrdom of Joseph Smith in 1844, particularly those of George Q. Cannon, Church apostle from 1860 to his death in 1901. It was Cannon’s teachings, Allen argues, that prompted the first wave of depictions of the First Vision in works by Church artists in the late 1870s.9 It was C.C.A. Christensen’s now lost painting “Mormon Panorama One/The First Vision” that inspired George Manwaring to write “Joseph Smith’s First Prayer” (“O How Lovely Was the Morning”) in 1878, which would eventually be included in the Church Hymnbook seventy years later.10

Allen describes the course of the First Vision story from obscurity in 1830 to relative prominence in the late nineteenth century, and says that “from there the story of the First Vision as a fundamental theme in the presentation of Mormon doctrine only expanded upon the pattern established by the artists, preachers, and writers of the 1880s.” In this paper, I will further explore this expansion, particularly where it involves the Church’s expanding filmmaking efforts and missionary work.

I will also examine the changing emphasis on aspects of the story as seen in Church films depicting the First Vision. The creation of One Hundred Years of Mormonism as a response to cinematic detractors shows a criticism and response relationship between the Church and its critics, and we will see how this relationship applies to the Church’s narrative of the Vision, particularly where the first First Vision film is concerned. It took until 1976 for a film featuring the First Vision to be made, but in the last fifteen years (2004–2019) the Church has made three films of high technical and artistic sophistication depicting the First Vision, and we can see a significant change in their respective narratives responding to new criticisms of Joseph Smith’s accounts.11

Earliest known artistic depiction of the First Vision. Woodcut by J. Hoey, 1873.

The “metamorphosis” was a paradigm shift, after which it becomes necessary to describe the place of the First Vision by addressing those occasions where it is omitted rather than those where it is included. It’s important to note that this change is one of emphasis and of how a narrative is constructed from the historical accounts. Church doctrine regarding the Restoration did not change after its canonization in scripture in 1880, though lesson books, missionary manuals, and common topics of sermons did change and undoubtedly will continue to change for the foreseeable future.

Two Narratives of the Genesis of the Church

Did the Restoration begin with Joseph Smith’s First Vision? Or did it begin with the Book of Mormon and the visions of the angel Moroni?

When The Church of Jesus Christ of Latter-Day Saints was founded April 6, 1830 in Fayette, New York, early members were typically converted by the teachings of the Book of Mormon and its origin story in the visitations of the angel Moroni to the young Joseph Smith.12 In his account, seventeen-year-old Joseph Smith was praying in his family’s home in upstate New York in 1823 when a heavenly messenger appeared. The messenger said his name was Moroni, and that there was a book of ancient scripture buried in a hill nearby.13

This narrative gained early prominence in the Church, possibly because a people familiar with apocalyptic literature could have seen Moroni as the angel “having the everlasting gospel to preach unto them that dwell on the earth” referenced in the Book of Revelation.14 He is the angel depicted in statue atop the spires of most of the Church’s temples from 1893 to the present day, and according to the Book of Mormon, Moroni is the son of the ancient historian for whom the book is named.15 This is the story that would be dramatized in 1913 in One Hundred Years of Mormonism.

The angel Moroni depicted above the Utah Provo City Center Temple

The first image that present-day visitors to the main exhibit at the Church History Museum in Salt Lake City encounter is a vibrant floor-to-ceiling photograph of a forest in rural New York. It was in this small forest, now known as the Sacred Grove, that Joseph Smith claimed to have seen his First Vision in the spring of 1820. According to his recollection written in 1838 and canonized as Church scripture in 1880, the fourteen-year-old Joseph was concerned with questions of religious uncertainty, owing to a series of Protestant revivals in his area which contended with each other over matters of doctrine.16 After reading in the Epistle of James, “If any of you lack wisdom, let him ask of God, that giveth to all men liberally, and upbraideth not; and it shall be given him,”17 he resolved to pray about the matter and retired to the woods to do so. He describes the prayer thus:

15 After I had retired to the place where I had previously designed to go, having looked around me, and finding myself alone, I kneeled down and began to offer up the desires of my heart to God….
16 …I saw a pillar of light exactly over my head, above the brightness of the sun, which descended gradually until it fell upon me….
17 …When the light rested upon me I saw two Personages, whose brightness and glory defy all description, standing above me in the air. One of them spake unto me, calling me by name and said, pointing to the other—This is My Beloved Son. Hear Him!
18 My object in going to inquire of the Lord was to know which of all the sects was right, that I might know which to join. No sooner, therefore, did I get possession of myself, so as to be able to speak, than I asked the Personages who stood above me in the light, which of all the sects was right (for at this time it had never entered into my heart that all were wrong)—and which I should join.
19 I was answered that I must join none of them, for they were all wrong…18

The story would not appear in print until 1840—twenty years later—when it was included in a pamphlet published in Scotland by early Church missionary Orson Pratt entitled A[n] Interesting Account of Several Remarkable Visions, and of the Late Discovery of Ancient American Records.19 Pratt was likely working from his memory of Joseph Smith’s 1832 account, as Joseph had written or dictated the story on five known occasions. It would be Joseph’s 1838 account that was published in 1851 as part of the Pearl of Great Price and canonized as Church scripture in 1880.20

Joseph Smith spoke little of the Vision in his own lifetime, and usually among friends, though in the last few years of his life he would include it in at least one sermon given in the home of a convert to the Church.21 The reasons for his relative silence on the matter can only be speculated on: Joseph may not have seen his vision as substantially different from those reported in some of the wilder revivalist prayer meetings of the era before he came to terms with the scope and nature of his ministry, or he may have believed personal conversions were of a more private nature than prophetic revelations.22 Moreover, the account’s assertion of a nontrinitarian, corporeal God was a foreseeable cause for contention with potential converts. Though trinitarian belief was not universal in the nineteenth-century United States, it was (and continues to be) a majority belief among Protestant groups.23 But the Church shifted its narrative in the twentieth century to emphasize the First Vision and respond to the criticisms against it.

The First Vision depicted in stained glass in the Salt Lake Temple, circa 1890

The First Vision in Twentieth-Century Missionary Work

Missionaries in the early Church preached without any standardized lessons or guidelines from the Church, teaching instead from the scriptures, their own knowledge and intuition, and the spirit of their testimony. In the first half of the twentieth century, some Church missions printed guidelines and pamphlets on their own initiative, and pamphlets by other writers were sometimes available for purchase by missionaries—but these were not yet published or formalized by the Church.24

Before the Church published standardized lessons for use in the work of its full-time missionaries, it undertook an initiative to publish a series of filmstrips with scripts for accompanying narration for missionary use. For this purpose, the new Radio, Publicity, and Mission Literature Committee was formed in 1935 with future President of the Church Gordon B. Hinckley, then an aspiring journalist, as its first employee and Executive Secretary.25 Hinckley was noted among Church leaders for his keen sense of public relations and his approach to responding to criticism.

It would be seventeen years from the committee’s formation to the publication of a book of lesson manuals. In the meantime, Hinckley commissioned and wrote the accompanying texts for a series of filmstrips, reels of approximately fifty still images recorded on 35mm film to be projected while the missionary read a brief accompanying narration. While some of the later, more elaborate filmstrips included costumed actors or color illustrations, the earliest simply showed the locations corresponding to the events of Church history.

Landmarks of Church History, distributed beginning in 1936, includes an image of the Sacred Grove, though the images and text accompanying the angel Moroni narrative outnumber those concerning the Vision. In the accompanying script, however, Hinckley says that “on the experience of that morning in the grove pivoted the eventful life of Joseph Smith and the lives of a million Latter-Day Saints.”26 This is language not far removed from his description of the importance of the event sixty-six years later as President of the Church.

But missionary practices at the time could vary, sometimes wildly, from mission to mission and even among missionaries in the same area. It is therefore useful to compare Hinckley’s understanding of the place of the First Vision to what was taught in other missions.

In the late 1940s, Richard L. Anderson, a missionary serving in the Northwestern States Mission, wrote A Plan for Effective Missionary Work for use in his mission. It would be adopted in other missions and at its peak around 1951 would be used by about sixty percent of missionaries in the Church.27 The lessons in the book take the form of hypothetical dialogues with interested potential converts, setting the precedent for the mission plans subsequently published by the Church. Anderson’s dialogues include the story leading up to the First Vision while omitting the vision itself, shifting the focus back to the angel Moroni story:

Elder Smith: “(In) 1820 Joseph Smith was a boy of fourteen years of age and in New York at the time ministers of many different churches held revival meetings and solicited membership in their churches. He desired to join one of the churches, but you can imagine how perplexed he was, trying to decide which one was the true church. In reading the Bible he found a promise that if he would ask of God in faith he would gain the answer. You believe in prayer, don’t you, Mrs. Jones?”
Mrs. Jones: “Yes I do.”
Elder Smith: “You can imagine the faith of that fourteen-year-old boy, in going into the woods and asking the Lord for the information he desired. In answer to those prayers, he received many direct visions—in our own generation! An angel actually stood beside his bedside September 21st, 1823, and said, ‘My name is Moroni…’”28

For the first half of the twentieth century, the story of the First Vision was being used in the Church’s missionary efforts, but not to establish the foundation of the Restoration. It was not yet used as the starting point or hook for potential converts, but rather as evidence to support the Church’s nontrinitarian theology. In A Systematic Program for Teaching the Gospel (1952), the first formal missionary lesson book published by the Church, the First Vision does not appear in the lesson on “The Restoration,” but the account of the visitation of the angel Moroni does.29 The First Vision does appear in this book in the lesson on “The Godhead” to provide three points of evidence:

A. This vision proves that God the Father and the Son have bodies similar in form to man and that they are separate and distinct.
B. It completely contradicts the sectarian concept of God.
C. The vision is conclusive evidence that Joseph Smith was a prophet.30

Unlike the discussions of the 1960s–1990s, these lessons were not formatted as dialogues, nor were they intended to be memorized and recited by the missionary.31 This changed less than ten years later with A Uniform System for Teaching Investigators in 1961, which implemented a requirement to memorize the account of the First Vision that would persist to the present day, even after Preach My Gospel in 2004 eliminated the practice of memorizing and reciting the lessons.32

Criticism and Response in Film

Twentieth-century scholarly critics of the Church such as Fawn M. Brodie, author of the influential 1945 critical biography of Joseph Smith No Man Knows my History, dismissed the First Vision briefly, saying it “may have been sheer invention, created sometime after 1830 when the need arose for a magnificent tradition to cancel out the stories of his fortune-telling and money-digging.”33 Criticism of the era focused primarily on the Book of Mormon or on allegations regarding Joseph Smith’s personal character. But in 1967, Reverend Wesley P. Walters leveled a novel attack on the historicity of the First Vision account which quickly gained attention among Church scholars and leaders. Compared to Brodie’s use of retroactive accounts accusing Smith of “lying habits,” Walters’s criticism was more in line with historical methods:

“A vision, by its inward, personal nature, does not lend itself to historical investigation. A revival is a different matter—especially one such as Joseph Smith describes—in which “great multitudes” were said to have joined the various churches involved. Such a revival does not pass from the scene without leaving some traces in the records and publications of the period. In this study we show by the contemporary records that the revival which Smith claimed occurred in 1820 did not really take place until the fall of 1824. We also show that in 1820 there was no revival in any of the churches in Palmyra and its vicinity. In short, our investigation shows that the statement of Joseph Smith, Jr., cannot be true when he claims that he was stirred by an 1820 revival to make his inquiry in the grove near his home.”34

Rather than retreat from this criticism, the Church shifted its efforts and emphases to respond head-on. Walters’s historical attack was the catalyst of a First Presidency-supported effort to “collect basic documentary material” to refute Walters’s case. This effort employed three BYU historians, including Richard L. Anderson, and “some forty scholars.”35 It produced a new surge of historical attention on the First Vision—particularly on the revivalist preachers referred to in the account—from BYU scholars as well as Church leaders. Milton V. Backman, Jr. found that the years from 1800 to 1860 were in fact a period of constant religious revival in the northeast United States, and his and other scholars’ findings were published in the Spring 1969 issue of BYU Studies, which was entirely dedicated to the First Vision.36

It was sometime soon after (1970 or 1971) that Doug Stewart—future writer of Saturday’s Warrior—began work on the script for a screen adaptation of the First Vision. The Church had been making films through the BYU Motion Picture Studio since 1953 under the supervision of former Walt Disney animator Wetzel “Judge” Whitaker, including stories from the Church’s history.37 The Lost Manuscript (1974), Whitaker’s final production before his retirement, would once again feature the appearances of the angel Moroni and the translation of the gold plates.38

A script was written for a potential First Vision film for the Church pavilion at the 1964 World’s Fair in New York, but this was passed over in favor of Man’s Search for Happiness, which would become an effective missionary tool and the Church’s most elaborate production to date. Before the production of The First Vision, Church films were sponsored by the various committees and auxiliaries of the Church and funded from their respective budgets, but The First Vision would be the first BYU Motion Picture Studio film sponsored directly by the First Presidency of the Church rather than an auxiliary.39

The writer, director, and producers of The First Vision kept several lists of preachers of the place and era who may have participated in the religious revivals mentioned in Joseph Smith’s account.40 These preachers would become characters in the film, and snippets from their teachings would be condensed and compiled in the film itself. In one scene of many depicting religious meetings of the period, a preacher pounds on his pulpit shouting, “Saved or damned? Without faith it is impossible to please God, for he is the rewarder of those who diligently seek him!” A woman in the congregation dissents, “We’re saved by grace, not by works!”41

An evangelical revival meeting as depicted in The First Vision: The Visitation of the Father and the Son (1976)

This is part of the pattern of criticism and response between the Church and its critics. Wesley Walters’s attack on the historicity of the religious revivals at the time of the First Vision was answered several years later by a film in which those revivals take center stage. The most recent film of the First Vision demonstrates this same criticism and response relationship. The First Vision is criticized today, less by historians and more by online critics, on the basis of the existence of multiple accounts from multiple sources, some of which omit certain portions or even contradict one another.42 The 2017 film produced for the Church History Museum addresses these criticisms and concerns with an introductory title:

Between 1832 and 1844, Joseph Smith and some of his closest friends recorded at least nine accounts of Joseph’s First Vision experience, given on different occasions to different audiences.

The most detailed of these accounts, written in 1838, has been published in a volume of scripture called the Pearl of Great Price.

What you are about to see draws upon all of the written First Vision accounts to provide additional perspective and insights into this remarkable event.43

This new film does not show the revival meetings at all, confining itself to the events in the grove. As critics move to new ground, the Church responds not with retreats or changes in doctrine, but by shifting its emphasis and affirming those truths under attack.

Conclusion

Grace Johnson, author of The Mormon Miracle and the 1967–2019 pageant of the same name, described the Church and its members as “…a people looking backward upon a mighty epic. All because, a boy of fourteen… went into the woods… to pray.”44 The Church has not always seen itself as the consequence of that prayer in the woods, but after almost two centuries the Church’s cultural and historical narrative regarding the First Vision and the foundation of the Restoration has evolved, as all good historical narratives should. As critics have attacked the story, the Church and its members have found the First Vision to be a sturdier foundation than anyone expected.

The First Vision as depicted in The Mormon Miracle Pageant, 2015

Note: This paper was written for BYU’s The Historian’s Craft course (HIST 200) in June of 2019, though I’ve made minor revisions since then. If you want to see the videos referenced in the paper, most of them are available at the YouTube channel Hard-to-Find Mormon Videos (https://www.youtube.com/channel/UCAnRNCf5m5I0pEdqYMfGOgA). They have a lot of old stuff ranging from the profound to the unpleasant.


Endnotes:

1. The Church had, in fact, outlawed the solemnization of polygamous marriages under pressure from the United States government over 20 years previously. See Doctrine and Covenants, Official Declaration 1 (1981 Edition).

2. Randy Astle, Mormon Cinema: Origins to 1952 (New York: Mormon Arts Center, 2018), 163.

3. Astle, Mormon Cinema, 195.

4. “Amusements,” Salt Lake Tribune, Feb. 4, 1913, 9.

5. Astle, Mormon Cinema, 197.

6. Gordon B. Hinckley, “The Marvelous Foundation of Our Faith” (sermon, 172nd Semiannual General Conference of the Church of Jesus Christ of Latter-Day Saints, Salt Lake City, UT, October 2002).

7. A Uniform System for Teaching Investigators (Salt Lake City: The Church of Jesus Christ of Latter-Day Saints, 1967), 30.

8. James B. Allen, “Emergence of a Fundamental: The Expanding Role of Joseph Smith’s First Vision in Mormon Religious Thought,” Journal of Mormon History, Vol. 7 (1980): 44.

9. Allen, “Emergence of a Fundamental,” 55.

10. John H. Manwaring and George Ernest Manwaring, “George Manwaring” (unpublished biography, Rootsweb, June 27, 1902), 1; Richard L. Jensen and Richard G. Oman, C.C.A. Christensen: 1831-1912: Mormon Immigrant Artist (Salt Lake City: The Church of Jesus Christ of Latter-Day Saints, 1984), 91.

11. The First Vision: The Visitation of the Father and the Son to Joseph Smith, dir. David K. Jacobs (Provo, UT: Brigham Young University Motion Picture Studio, 1976), https://www.youtube.com/watch?v=nqq9lDUpduU; The Restoration, dir. T.C. Christensen (Provo, UT: LDS Motion Picture Studio, 2004), https://www.churchofjesuschrist.org/media-library/video/2010-07-004-the-restoration?lang=eng; Joseph Smith: Prophet of the Restoration, dir. T.C. Christensen (Provo, UT: LDS Motion Picture Studio, 2005), https://www.churchofjesuschrist.org/media-library/video/2006-05-01-joseph-smith-prophet-of-the-restoration-2002-version?lang=eng; Ask of God: Joseph Smith’s First Vision (Provo, UT: LDS Motion Picture Studio, 2017), https://www.churchofjesuschrist.org/media-library/video/2017-01-0100-ask-of-god-joseph-smiths-first-vision?lang=eng.

12. James B. Allen, “The Significance of Joseph Smith’s First Vision in Mormon Thought,” Dialogue: A Journal of Mormon Thought 1, no. 3 (Fall 1966): 33.

13. See The Book of Mormon: Another Testament of Jesus Christ, Testimony of the Prophet Joseph Smith (1981 Edition).

14. Rev. 14:6 (King James Version); Allen, “Emergence of a Fundamental,” 52.

15. The Book of Mormon: Another Testament of Jesus Christ, Mormon 8:1–5 (1981 Edition).

16. Pearl of Great Price, Joseph Smith—History (1981 Edition) 1:5–10.

17. James 1:5 (KJV).

18. Pearl of Great Price, Joseph Smith—History (1981 Edition) 1:15–19. This account includes an attempt to interfere with the prayer by “the power of some actual being from the unseen world,” interpreted to be the devil. The inclusion or exclusion of this aspect of the event in accounts and Church depictions of the events is a narrative of its own, which I’ve omitted as it falls outside the scope and constraints of this paper.

19. James B. Allen and Leonard J. Arrington, “Mormon Origins in New York: An Introductory Analysis,” BYU Studies 9, no. 3 (Spring 1969): 255–256.

20. Matthew B. Christensen, The First Vision: A Harmonization of 10 Accounts from the Sacred Grove (Springville, UT: Cedar Fort, Inc., 2014), 5–6.

21. Christensen, The First Vision: A Harmonization, 9.

22. Richard Lyman Bushman, Joseph Smith: Rough Stone Rolling: A Cultural Biography of Mormonism’s Founder (New York: Vintage Books, 2007), 39–41.

23. Allen, “Emergence of a Fundamental,” 47.

24. Benjamin Hyrum White, “A Historical Analysis of How Preach my Gospel Came to Be” (master’s thesis, Brigham Young University, 2010), 1–2.

25. Matthew Porter Wilcox, “The Resources and Results of the Radio, Publicity, and Mission Literature Committee: 1935–1942” (master’s thesis, Brigham Young University, 2013), 32.

26. Gordon B. Hinckley, Landmarks of Church History (Salt Lake City: Radio, Publicity, and Mission Literature Committee, 1936), 9.

27. White, “How Preach my Gospel Came to Be,” 3.

28. Richard L. Anderson, A Plan for Effective Missionary Work (Kaysville, UT: Inland Printing Co., 1954), 8.

29. A Systematic Program for Teaching the Gospel (Salt Lake City: Corporation of the President of the Church of Jesus Christ of Latter-Day Saints, 1955), 78–95.

30. A Systematic Program, 44–61.

31. A Uniform System, 11–13, 30–31; White, “How Preach my Gospel Came to Be,” 4.

32. Preach My Gospel (Salt Lake City: Intellectual Reserve, Inc., 2004), 19, 36–38; White, “How Preach my Gospel Came to Be,” 2–8.

33. Fawn M. Brodie, No Man Knows my History: The Life of Joseph Smith the Mormon Prophet (New York: Alfred A. Knopf, 1945), 25.

34. Wesley P. Walters, B.D., “New Light on Mormon Origins from the Palmyra Revival,” Bulletin of the Evangelical Theological Society 10, no. 4 (Fall 1967): 228.

35. Samuel Alonzo Dodge, “Joseph Smith’s First Vision: Insights and Interpretations in Mormon Historiography,” in Exploring the First Vision, eds. Samuel Alonzo Dodge and Steven C. Harper (Provo, UT: Religious Studies Center, Brigham Young University, 2012), xii.

36. Milton V. Backman, Jr., “Awakenings in the Burned-over District: New Light on the Historical Setting of the First Vision,” BYU Studies 9, no. 3 (Spring 1969): 301.

37. Randy Astle and Gideon O. Burton, “A History of Mormon Cinema: The Third Wave,” BYU Studies 46, no. 2 (2007): 85.

38. The Lost Manuscript, dir. Wetzel O. Whitaker (Provo, UT: Brigham Young University Motion Picture Studio, 1974), https://www.youtube.com/watch?v=wq6rY44WWCg.

39. Randy Astle and Gideon O. Burton, “A History of Mormon Cinema: The Fourth Wave,” BYU Studies 46, no. 2 (2007): 98.

40. These papers are scattered throughout the director’s papers collected within the David Kent Jacobs Collection on Mormon Films, 1955–1988, L. Tom Perry Special Collections, Brigham Young University Library, Provo, UT.

41. The First Vision: The Visitation of the Father and the Son.

42. For a comparison of these various accounts and their respective historical contexts, see Matthew B. Christensen, The First Vision: A Harmonization of 10 Accounts from the Sacred Grove (Springville, UT: Cedar Fort, Inc., 2014).

43. Ask of God: Joseph Smith’s First Vision.

44. Grace Johnson, The Mormon Miracle (Salt Lake City: Deseret Book Company, 1952), 30.


Top Image: “This is my Beloved Son. Hear Him!” Stained glass, 1913. Currently in the Church History Museum, Salt Lake City.