Reading academic science articles (for non-academics)

I find understanding the best and brightest of human knowledge to be hugely empowering.

Given that Year 1 is all about physiological health, that means a lot of scientific articles around nutrition, sleep, movement, and more.

I thoroughly enjoy learning, understanding, and knowing, but I don't necessarily enjoy parsing dense texts so dry they make cardboard feel like an oasis spring. And even if it weren't so boring, there are also a bunch of sand traps on the path to rational sensemaking.

Why I am so in awe of the scientific community

First, I have to bow where reverence is due:

  • Peer-reviewed articles are the product of someone who spent years caring really hard about one specific concept, studying it diligently, and then devoting themselves to a grueling process of writing, editing, and re-working. That much focused human effort is an act of pure devotion and it can yield immeasurable value to human, non-human, and inanimate landscapes.
  • Academic literature (of repute) requires a degree of precision and specificity that's fallen out of style in most popular arenas of discourse. In doing so, it maintains a standard of intellectual excellence and rigor that I find charming.
  • Academics frequently cite each other, and build upon each other's work, to continually reach higher and further into our collective conscious. Put another way, science is an inherently collaborative endeavor. Despite the healthy competition between labs and research institutions, there is a sense of “we’re all in this together” that feels refreshing and inspiring.
  • Academia internalizes its own fallibility - scientists try their best to be right, but are open to being proven wrong and learning along the way.
  • Almost every major advance in quality of life and health outcomes over the last 150 years started in a lab.

Even more interesting, beyond the basics, a lot of the science taught when I was in middle and high school is now understood to be incorrect. And there are whole new branches that didn't exist when I was a kid.

And perhaps most interesting is that we are now pointing all the brilliance of the scientific machinery to previously unimaginable domains. Did you know they're doing fMRIs to measure consciousness? Or that the heart produces a measurable electromagnetic field around the body (that some may call an aura)? Many concepts of spirituality and collective intelligence that used to be relegated to the ashrams of esoterics are now demonstrably measurable in a laboratory setting. We are in a period of rapid acceleration of human awareness and it's an exciting time to be alive.

But science is just a tool that can be used graciously ... or not

Of course, all the benefits of science can be subverted and perverted just like any other awesome and powerful tool.

This article posits a few reasons why science (in the author's case, especially the social and psychological sciences) should be taken with a huge block of salt. A few others that I can add to the list from my own skeptical mindset:

  • Some science funders have profit-driven (vs. truth-driven) agendas. Think back to the studies paid for by cigarette companies or soda companies - smoking and high fructose corn syrup are just fine for us, right? Most insidious is when the scientists don't know they're being influenced, like doctors who prescribe more of a drug when they are paid by the manufacturer.
  • Some academic journals are predatory. Just because something is published doesn't mean it was published in a reputable journal with diligent editorial practices. Parallel to the proliferation of questionable content on the internet, there is an explosion of questionable, for-profit journals that prey on the "publish or perish" reality of professional science.
  • Some authors have a questionable track record, so their future publications should be treated with additional skepticism.
  • Most non-academic references to scientific findings flatten nuances and overstate conclusions. That can yield a horribly misleading set of "Science says" statements that are far from accurate.

So it's complicated - but why do I care?

It's unfortunately easy to point at known problems and write off science entirely. I see this across a variety of communities - spiritual / religious communities are often derided as "anti-science" but I also see symptoms of this disease in business, technology, athletic, and other communities I swim in. There is a big difference between assessing the limitations of discovery and disregarding the wealth of human knowledge.

And I totally get it - sensemaking is exhausting. Daniel Schmachtenberger has a well-reasoned (although too violently framed for my flavor) commentary on The War on Sensemaking. The skinny is that making sense of the world is time-consuming, hard, complex, and is often against the grain. It's easier to not question and just fit in.

Simply put, that doesn't work for me. I don't want to give up on scientific knowledge, but I also cannot trust something just because "Science says ...". Who is this Science anyways?

I really care about bridge building, and like the nerd that I am, I get exceptionally excited about synthesizing and distilling reputable scientific thought into a simple, coherent narrative. The more I make science accessible to non-scientists, the greater our collective awareness, the more we can marvel at the wonder and awe that is Life.

What follows is my current working model for making that happen.

A mental model called Science for Non-Academics (SNA)

From a functional standpoint, I envision the following scene: I sit down at a desk with a journal article, a bowl of tea, a pen & some scratch paper. Technically speaking, I'm probably standing, staring at a monitor, and taking notes in Roam - but hey, I've definitely got my bowl of tea! Now what?

I like the idea of generating an Algorithm of Thought (AoT) that encompasses a series of questions to guide my process. In systematizing this cognitive flow, I can ensure that every relevant article is held to the same standard of intellectual integrity. Even more important, I can compare and contrast the output of the AoT from various articles to synthesize a network of thought in a given domain.

A bonus of Roam is that I can also block or page reference between various articles and Mein Zettel. That means I can build my own internal citation library to highlight the most impactful studies, authors, and journals within my own worldview. As that awareness emerges, I naturally develop an explicit priority list for future collaborators and interviewees.

Within this SNA framework, the stool of meaning has 3 legs:

  • Why does this matter?
  • What does it say and mean?
  • Should it be believed?

I'll speak to each of these individually.

Why does this academic article matter?

The only relevant information is useful information - if it's not useful, it's distracting. That's a bold assertion, and I only feel so confident saying that because I take a pretty expansive approach to "useful".

Something is "useful" to me if it touches on a question I care to answer. I have a bottomless list of questions in my consciousness. Like a fractal electric sheep, every time I feel like I'm approaching answering one question, many more emerge from within a boundless spiral of curiosity and intrigue. The mystery never ceases to amaze me, and that's a feature, not a bug.

Before developing a personal knowledge management (PKM) system, I would feel crushed by the weight of all these questions and answers. Nowadays, I gleefully write them down with a [[question/Open]] or [[question/Closed]] tag in Roam and synthesize my awareness in Mein Zettel. The synthesizing step is what takes this from a PKM to a Personal Wisdom Management (PWM) system.

The specific workflow or process isn't as important as having a centralized place for one's current list of questions to explore. Sometimes these are explicit and we can write them down. Sometimes there is only a tickle of cognition or a somatic intuition flagging that something worth honoring is coming up. Any which way, when I'm looking at an academic article, I need to be able to clearly enunciate why my precious time and attention should be focused on parsing this text.

What does it say and mean?

If an article matters, then it's valuable to know what it says. This includes the full nuance of caveats and limitations based on methodology, analytic skill, confounding factors, author interpretations, and more.

It's also not enough to stop at the level of understanding the specifics of that one article. Now that I understand what that article says, I have to understand what it means in the greater body of knowledge that I'm developing in my PWM.

This means synthesizing the findings into a domain-specific internal conversation (like my Mein Zettel). Again, the specific format isn't as important as the function of externalizing from my own fallible memory. If I don't write it down, I'll drown in my own cognitive biases and lose connections and insights that could be the key to the next level of thought and awareness.

Should it be believed?

In this step, I seek to assign a Truth Score that is a dynamic calculation based on a set of known criteria.

As the library of articles I digest accumulates, the Truth Score acts as a weight that I can assign to certain concepts or findings. If something sounds really huge and is deeply important, but it comes from a questionable funding source, it is appropriate for me to discount the findings (perhaps, but not necessarily, down to 0).

The reason I want it to be dynamically calculated is that Science is a living and breathing thing. If an author has zero retracted articles and tons of citations, I can take that as a reasonable proxy that they do relatively good work. If there is a sudden wave of scrutiny and the author's retractions spike to 15, then I want every finding posited by that author to get downgraded. One cautionary example is the (in)famous food scientist Brian Wansink, formerly of Cornell.

I take nothing for granted, and in the ever-shifting landscape of academic knowledge, I think it's important to gut check on the truthiness of what I'm reading with consistent and measurable techniques.
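To make "dynamically calculated" concrete, here is a minimal sketch of an author trust weight. This is my own toy model, not a published formula: the citation cap of 1,000 and the 20% per-retraction penalty are invented for illustration.

```python
def author_trust_weight(citations: int, retractions: int) -> float:
    """Return a 0.0-1.0 trust weight for an author.

    Heavily cited, never-retracted authors approach 1.0;
    each retraction sharply discounts the weight.
    """
    base = min(citations / 1000, 1.0)   # citation proxy, capped at 1.0
    penalty = 0.8 ** retractions        # each retraction keeps only 80%
    return round(base * penalty, 3)

# A sudden wave of retractions downgrades every finding from that author:
print(author_trust_weight(citations=5000, retractions=0))   # 1.0
print(author_trust_weight(citations=5000, retractions=15))  # 0.035
```

Because the weight is recomputed from the latest citation and retraction counts rather than stored, a Wansink-style spike in retractions automatically drags down every finding tied to that author.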

Take it to (Question) 11

The primary guiding principle of this AoT workflow is that, if done correctly, I have extracted all the relevant information from the scientific article and do not need to look back to the source material again. 

The secondary guiding principle is to set up a workflow that reduces friction and facilitates flow state and ease, as much as possible.

To address these two guiding principles, I posit 11 distinct questions that I can ask myself:

Question 1: Is this valuable to me / my research?

  • Start with the abstract as the first gate. If no, don't proceed.
  • If the abstract gate is cleared, proceed to skimming the intro, conclusion, and discussion. If the conversation seems relevant, it's worth a deeper read.
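The two-gate filter above can be sketched as a tiny function. The boolean inputs are hypothetical stand-ins for my own judgment at each gate:

```python
def worth_deep_read(abstract_relevant: bool, skim_relevant: bool) -> bool:
    """Gate 1: the abstract. Gate 2: a skim of intro, conclusion, discussion.

    Only an article that clears both gates earns a full, deliberate read.
    """
    if not abstract_relevant:   # gate 1 failed: stop here, don't proceed
        return False
    return skim_relevant        # gate 2: deeper read only if still relevant

print(worth_deep_read(abstract_relevant=True, skim_relevant=False))  # False
```

The point of the ordering is cheap filtering: the abstract costs a minute, the skim costs ten, and the full read costs hours, so each gate protects the more expensive step behind it.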

Question 2: What are the authors investigating?

  • Translate to my own words. Paraphrasing triggers a deeper level of clarity and understanding.

Question 3: How would I design an experiment to investigate this question?

  • It is critical to do this before reading how the scientists did their work. Forcing myself to design the experiment without anchoring will feel uncomfortable at first, but it ultimately allows greater clarity, creativity, and curiosity about where my design differs from the authors' choices.
  • With my limited (but growing) knowledge base, what tests would I run, and how would I set them up? (assume I have all necessary resources and equipment)
  • What are the pros and cons of this approach?
  • What are the inherent assumptions and limitations of this approach? Is there a bias built into this testing mechanism?

Note: in the fields I care about, I plan to create a master list of experimental designs and techniques. That way, I can short-circuit this step by block referencing to that design method including all of the known pros, cons, considerations, and limitations.

Question 4: Based on my current awareness, what would I expect to find?

  • From past reading and research, what is my educated guess for what the methods I described in Q3 will yield?
  • What sources or information am I drawing upon to generate this hypothesis? In this section, it will be appropriate to reference findings from other studies I've reviewed to highlight why I expect something to be true.

Question 5: What did the authors actually find?

  • Paraphrase key findings into plain English in a few bullets or sentences. This will likely come from the results and discussion section of the paper.
  • Is this aligned with what I expected? If not, what are the nuances or other considerations that created the gap?

Question 6: How did the authors run the experiment?

  • Quick bullets for the tools, tests, mechanisms, protocols, analysis techniques, and other standards they utilized. This will most likely come out of the methods section.
  • In this section, it's important to include the context within which the methods were applied. A sample size of 2 individuals is very different from 2000 individuals, and that context should be noted for future reference.
  • Is this aligned with how I would have structured the investigation? If not, why did they choose their method? Is it better than my own, or does it accommodate limitations or considerations I did not realize in my theoretical construct?

Question 7: How does this fit into my larger body of knowledge and narrative?

  • Add key findings to Mein Zettel page and document links, gaps, and other relations as necessary. This is the act of synthesis that is so important (and so often lacking) to evolve a PKM system into a PWM system.

Question 8: What other considerations or discussion points did the authors bring up?

  • Anything worth parsing into my open questions list?

Question 9: What sources or references from this article should I review?

  • Add to my queue for review, including what attracted me to that reference paper in the first place. Bonus points for tagging out the queue so I can filter down to topics or concepts that are particularly resonant in any given moment.

Question 10: Do I buy it?

  • With the full force of my boundless human intuition, beyond the mechanics of the practical pieces answered above, do I buy the conclusions? What does my deeper wisdom suggest about the findings?
  • If not, can I put in words why not?

Question 11: What more do I want to learn or investigate knowing what I know now?

  • Add to my list of open questions for further review and analysis.

Truth Score

If I believe that an article is worth my attention, then it's also worth assigning a Truth Score. Each question below is answered with a 0-10 scale where 0 is "would not trust it with a rusty can" and 10 is "trust with the utmost and highest confidence".

The overarching Truth Score for the article is a simple average of the answers to the questions below.

Are the authors, and especially the principal investigator (PI), credible? 0 - 10

  • This can be very hard to validate or quantify. It is usually a set of proxies relating to the institution where they work, the number of citations they receive, the journals where they publish, the grants they receive, and any other honors within their field.
  • That said, there is a known bias within academia towards older white men. Simply put, white men have a higher probability of getting into more prestigious institutions, being cited more often, getting into more well-regarded journals, getting bigger / more prestigious grants, and receiving other honors.
  • I suspect this will feel shaky in the beginning and will stabilize over time as I gain more exposure to various authors and writing styles and strengthen my own analytical chops. Also, this is the Truth Score variable that I expect to be most volatile.

Is the journal credible? 0 - 10

Does the funder have a non-truth agenda or conflict of interest? 0 - 10

Is the article asking a valid question? 0 - 10

Did the authors present a coherent and believable hypothesis? 0 - 10

Is the experiment design the best it could possibly be given known or expressed constraints? 0 - 10

Was the data collected from the experiment assessed using the best-known analytical techniques? 0 - 10

Is the analysis communicated in an honest and intelligible way? 0 - 10

Do the conclusions follow from the methods and the analysis? 0 - 10

Are the conclusions appropriately acknowledged within the larger sphere of awareness in this topic? Do the authors do a good job acknowledging supporting AND contradicting evidence for their conclusion? 0 - 10

SNA sounds intense and time-consuming - what gives?

It's certainly possible to skim academic articles and highlight whatever sounds right, but that feels like it leaves too much room for error and misinterpretation. Sadly, this would only accelerate and entrench our confirmation biases around truthiness.

And it's also possible to do slightly more deliberate reading with the SQ3R or KWL techniques, but those focus on the article proper without putting the article into the larger context of (appropriate) skepticism and (synthesized) personal knowledge.

I readily admit that the Q11 + Truth Score AoT may be overkill, and the only way to know is to put it through its paces. My personal hypothesis is that, like any new skill, it'll feel complicated and overwhelming with a blank page, but as I develop a library of methods and an awareness of the basics of relevant fields, it'll get easier over time.

Practicing this new skill of AoT sensemaking in science is also complicated and confounded by the difficulty of entering a new domain of thought. In addition to a new workflow, I'll also be learning a ton about experimental designs, statistical treatments, reputational markers, and other nuanced considerations that are domain-specific and have nothing to do with the Q11 workflow itself.

On the plus side, publishing my notes should be able to shorten that on-ramp and learning curve for others. Hopefully, my bridge-building can make your own journey that much easier, shorter, and more enjoyable.

Roam Templates

In the Roam file below I've included my native Roam template.

I warmly welcome any thoughts, criticism, feedback, or anything else from academics and non-academics alike. And if you want to practice this skill with me, reach out (henry @ this URL), and let's see what a multi-player version of this flow looks like.


For my fellow Roamans, here is the JSON file for import.

This file includes my Roam native templates. Once I invest myself into grokking Roam42's SmartBlocks, I'll level up and include those here as well.

Like what you see? Join my Thriving Thursdays newsletter

Every week you'll receive fascinating finds and synthesized sensemaking in physical, intellectual, emotional, and relational arenas.