
Is there a bias where long statements, books, articles, etc. are seen as more truthful?





Is there a cognitive bias that makes people believe that long statements / expressions / articles / books are more truthful than short ones (assuming that they are semantically equivalent)?


Message length is a peripheral cue in the elaboration likelihood model. This means that a message's length affects the likelihood that its recipient will be persuaded when the recipient is not scrutinizing the message's content attentively. When a message is evaluated through peripheral attention instead of central focus, simple heuristics that are easily fooled determine the extent of persuasion that results.

Petty and Cacioppo (1986, part 1) cite their earlier study (Petty & Cacioppo, 1984) as demonstrating that a greater number of arguments in favor of a message increases attitude positivity toward the message when the audience is not personally invested in understanding it, even if the arguments are weak. This doesn't work when the audience is highly involved: then a greater number of weak arguments only makes attitudes worse.

Petty and Cacioppo (1986, part 2) also cite Wood, Kallgren, and Preisler's (1985) study, which persuaded its relatively inattentive participants more strongly with long messages than short messages. Again, this effect did not vary with message quality, but disappeared among participants who paid closer attention.

@Eoin recommended Heit and Rotello (2012), who replicated the results of Wood and colleagues. In fact, they even found the effect of message length among participants who were told not to judge arguments by length. This also occurred when listening to verbal arguments. Good stuff…

References
- Heit, E., & Rotello, C. M. (2012). The pervasive effects of argument length on inductive reasoning. Thinking & Reasoning, 18(3), 244-277. Retrieved from http://faculty.ucmerced.edu/sites/default/files/eheit/files/argument%202012.pdf.
- Petty, R. E., & Cacioppo, J. T. (1984). The effects of involvement on responses to argument quantity and quality: Central and peripheral routes to persuasion. Journal of Personality and Social Psychology, 46(1), 69-81. Retrieved from http://psychology.uchicago.edu/people/faculty/cacioppo/jtcreprints/pc84a.pdf.
- Petty, R. E., & Cacioppo, J. T. (1986). The elaboration likelihood model of persuasion. Advances in Experimental Social Psychology, 19, 123-205. Retrieved from http://psychology.uchicago.edu/people/faculty/cacioppo/jtcreprints/pc86.part1.pdf (part 1), http://psychology.uchicago.edu/people/faculty/cacioppo/jtcreprints/pc86.part2.pdf (part 2), and http://psychology.uchicago.edu/people/faculty/cacioppo/jtcreprints/pc86.part3.pdf (part 3).
- Wood, W., Kallgren, C. A., & Preisler, R. M. (1985). Access to attitude-relevant information in memory as a determinant of persuasion: The role of message attributes. Journal of Experimental Social Psychology, 21(1), 73-85. Retrieved from http://goo.gl/nZgcNz.


It seems like a casual version of the illusion of validity, but the illusion of validity is a more general bias in which additional data is taken to generate additional validity. It comes up more often in a lab setting than in a debate setting, where an additional experiment may be included to lend support to a hypothesis even though the experimental outcome isn't actually surprising given the results of other experiments.


How to Recognize Bias in a Newspaper Article

This article was co-authored by Christopher Taylor, PhD. Christopher Taylor is an Adjunct Assistant Professor of English at Austin Community College in Texas. He received his PhD in English Literature and Medieval Studies from the University of Texas at Austin in 2014.


With all the information that's out there these days, it's important to be able to recognize bias in the news. If a newspaper article is biased, this means that an unfair preference for someone or something affected the way the reporter wrote the piece. The reporter might favor one side of a debate or a particular politician, and this could cloud the reporting. Sometimes reporters don't even mean to be biased; they may do it by accident, or because they didn't do enough research. To wade through this kind of reporting, you'll need to read very carefully, and you may even need to do your own research.


Community Reviews


Wonderful introduction to meta-science. I've been obsessively tracking bad science since I was a teen, and I still learned loads of new examples. (Remember that time NASA falsely declared the discovery of an unprecedented lifeform? Remember that time the best university in Sweden completely cleared their murderously fraudulent surgeon?)

Science has gotten a bit fucked up. But at least we know about it, and at least it's the one institution that has a means and a track record of unfucking itself.

Ritchie is a master at handling controversy, at producing satisfying syntheses - he has the unusual ability to take the valid points from opposing factions. So he'll happily concede that "science is a social construct" - in the solid, trivial sense that we all should concede it is. He'll hear out someone's proposal to intentionally bring political bias into science, and simply note that, while it's well-intentioned, we have less counterproductive options.

Don't get the audiobook: Ritchie is describing a complex system of interlocking failures. I need diagrams for that sort of thing.

Ritchie is fair, funny, and actually understands the technical details. Supersedes my previous fave pop-meta-scientist, Ben Goldacre.

This is an important topic, and the author does an excellent job explaining problems like p-hacking. But these issues are nothing new to scientists, so the main value of this book is whether it engages and clearly explains things for the general public. And there, I’m afraid the author may end up just increasing confusion by trying to turn everyone into a scientist. In terms of solutions to bad science, I wonder if we don’t need to start by addressing the underlying culture of corruption and incompetence, of which bad science is just one symptom.

Nerd addendum:
With nutritional research, for example, he makes a good point that the news media do a bad job of hyping all these small or shoddy or irrelevant studies. His immediate solution is to teach us all how to read a scientific paper, and then whenever you hear about an interesting study in the news, you should go and somehow (even illegally) get a copy of the study and analyze it for validity. That seems nuts and unfair. According to the book, doctors and scientists and editors of scientific journals are widely incapable of this, so how is every citizen going to master this skill? And why should you?

I think if people (scientists, doctors, or otherwise) are really interested in nutritional epidemiology, they should go deep and read, for example, Gary Taubes. That gives you an understanding of the research literature going back decades, explains what is wrong with the original studies that are often cited, and gives the implications in plain language. Then, if you want, you can look up a few of the studies he has detailed, and you’ll know what to look for and be able to verify whether they say what he says they say. You have to know stuff to learn stuff.

What matters is not the latest news item, but the overall weight of the best available evidence.

Another problem with his commentary on nutritional epidemiology is that he goes on from there to warn in general about all observational epidemiology, without pointing to when observational epidemiology does supply robust actionable evidence (trans fats, lung cancer, SIDS, etc., etc.).

In 1945, Robert Merton wrote:


In 2020, the sociology of science is stuck more or less in the same place. I am being unfair to Ritchie (who is a Merton fanboy), because he has not set out to write a systematic account of scientific production—he has set out to present a series of captivating anecdotes, and in those terms he has succeeded admirably. And yet, in the age of progress studies surely one is allowed to hope for more.

If you've never heard of Daryl Bem, Brian Wansink, Andrew Wakefield, John Ioannidis, or Elisabeth Bik, then this book is an excellent introduction to the scientific misconduct that is plaguing our universities. The stories will blow your mind. For example you'll learn about Paolo Macchiarini, who left a trail of dead patients, published fake research saying he healed them, and was then protected by his university and the journal Nature for years. However, if you have been following the replication crisis, you will find nothing new here. The incidents are well-known, and the analysis Ritchie adds on top of them is limited in ambition.

The book begins with a quick summary of how science funding and research work, and a short chapter on the replication crisis. After that we get to the juicy bits as Ritchie describes exactly how all this bad research is produced. He starts with outright fraud, and then moves onto the gray areas of bias, negligence, and hype: it's an engaging and often funny catalogue of misdeeds and misaligned incentives. The final two chapters address the causes behind these problems, and how to fix them.

The biggest weakness is that the vast majority of the incidents presented (with the notable exception of the Stanford prison experiment) occurred in the last 20 years or so. And Ritchie's analysis of the causes behind these failures also depends on recent developments: his main argument is that intense competition and pressure to publish large quantities of papers is harming their quality.

By only focusing on recent examples, Ritchie gives the impression that the problem is new. But that's not really the case. One can go back to the 60s and 70s and find people railing against low standards, underpowered studies, lack of theory, publication bias, and so on. Imre Lakatos, in an amusing series of lectures at the London School of Economics in 1973, said that "the social sciences are on a par with astrology, it is no use beating about the bush."

Let's play a little game. Go to the Journal of Personality and Social Psychology (one of the top social psych journals) and look up a few random papers from the 60s. Are you going to find rigorous, replicable science from a mythical era when valiant scientists followed Mertonian norms and were not incentivized to spew out dozens of mediocre papers every year? No, you're going to find exactly the same p<.05, tiny N, interaction effect, atheoretical bullshit. The only difference being the questionable virtue of low productivity.
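The "tiny N" complaint has a concrete statistical face: when studies are underpowered, the ones that do clear p < .05 necessarily report inflated effect sizes (the so-called winner's curse). A minimal, purely illustrative simulation — all numbers are made up, and the standard-error formula is a rough approximation:

```python
import random
import statistics

random.seed(42)

def tiny_study(n=10, true_d=0.3):
    """One underpowered two-group study. Returns the observed effect size
    and whether it cleared a two-sided .05 threshold (t_crit ~ 2.101, df=18)."""
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(true_d, 1.0) for _ in range(n)]
    d = statistics.mean(b) - statistics.mean(a)  # effect in units of sd = 1
    se = (2.0 / n) ** 0.5                        # approximate standard error of d
    return d, abs(d / se) > 2.101

results = [tiny_study() for _ in range(20000)]
significant = [d for d, sig in results if sig]

print(f"share of studies reaching p<.05: {len(significant) / len(results):.2f}")
print(f"mean observed effect, all studies:      {statistics.mean(d for d, _ in results):+.2f}")
print(f"mean observed effect, significant only: {statistics.mean(significant):+.2f}")
```

Under these assumptions only a small fraction of studies come out significant, and those that do overestimate the true effect of 0.3 severalfold — exactly the pattern that selective publication of small-N results bakes into a literature.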

If the problem isn't new, then we can't look for the causes in recent developments. If Ritchie had moved beyond "loose generalities" to a more systematic analysis of scientific production I think he would have presented a very different picture. The proposals at the end mostly consist of solutions that are supposed to originate from within the academy. But they've had more than half a century to do that—it feels a bit naive to think that this time it's different.

Finally, is there light at the end of the tunnel?

Again, the book is missing hard data and analysis. I used to share his view (surely after all the publicity of the replication crisis, all the open science initiatives, all the "intense soul searching", surely things must change!) but I have now seen some data which makes me lean in the opposite direction.

Ritchie's view of science is almost romantic: he goes on about the "nobility" of research and the virtues of Mertonian norms. But the question of how conditions, incentives, competition, and even the Mertonian norms themselves actually affect scientific production is an empirical matter that can and should be investigated systematically. It is time to move beyond "speculative insights" and onto "rigorous testing", exactly in the way that Merton failed to do.

First of all the title slaps, this is the kind of word play you want in a popular science book title.

Ritchie grabs your attention with some spicy cases of scientific fraud, but follows up with other pernicious problems that lead science astray. He goes on to suggest changes to the way research is conducted, funded, reviewed and published to right some of these wrongs. A worthwhile read (or listen) for researchers or mere muggles like myself.

I highly recommend this book for anyone planning (or considering) to do science, whether a bachelor's, a master's, or more. It's a great overview of how science is actually practiced, and how it can so easily go wrong. I also recommend it to current scientists, because it's a humbling reminder of what we're doing wrong, and a quick update on things we might have been taught as facts that have since been disproven.
The book is exceptionally well structured: very clear writing, very engaging, switching between as much information as needed to understand a given concept, compelling examples, and discussion of why it matters, what people might object to, etc. Really, really good.
However, the author fails to give proper due to the main strength of science: its ability to self-correct. This book is described as an "exposé", but in reality all of what he mentions has been known for decades, and in fact every single example he gives of fraud, negligence, bias, or unwarranted hype was uncovered not by external journalists but by other scientists. It was peers who read papers that looked suspicious and did more digging, or whole careers built around developing software and tools for automatically detecting plagiarism, statistical errors, etc. It was psychology itself that "wrote the book" on bias, which was fundamental to exposing the biases of scientists themselves. And more often than not, it was simply a future study, trying something better that should have worked but didn't, that disproved a flimsy hypothesis. Sure, fraud, hype, bias, and negligence are dragging science down, but science isn't "broken"; it's just inefficient. Wasting a lot of money on bad experiments and bad scientists should be avoided, but in the end a better truth tends to bubble up regardless. Anyone who has had to defend science against religious diehards will be particularly aware of this.

Also missing is proper consideration as to why these seemingly blindingly obvious problems have been going on for so long. As an insider, here are some of my answers:
- All this p-hacking (trying different analyses until something is significant). Scientists are not neatly divided into those that immediately find their results because of how fantastically well they planned their study, and those that desperately try to make their rotten data significant. Every. Single. Study. has to fine-tune its analysis once the data are in, not before. Unless you are in fact replicating something, you have absolutely no idea what the data will look like, or what the most meaningful way to look at them is. This means you can’t just tell scientists "stop p-hacking!"; you need an approach that acknowledges this critical step. Fortunately, an honest one exists that can be borrowed from machine learning: splitting your data into a "training" and "testing" dataset, where you fine-tune your analysis pipeline on a small subset, then rely only on the results of applying it to the larger one, using exactly the pipeline you previously developed, without further tweaking.
- The file drawer problem (null results not getting published). I think, especially in psychology, statistics courses are to blame for this: we don't reeeally understand how the stats work, so we rely on Important Things To Remember that we're taught by statisticians, and one of these is that "you can't prove a null hypothesis". In practice this gets interpreted as "null results are not real results, because nothing was proven". We are actively discouraged from interpreting absence of evidence as evidence of absence, but sometimes that is in fact exactly what we should be doing. For sure, not with the same confidence and in the same way with which we interpret statistically significant positive results, but at some point a study that should have found something and didn't is a meaningful indication that the thing might not in fact be there. A useful tool for breaking through this narrow-minded focus on positive results is equivalence testing (sometimes described as similarity testing), where you test not whether two groups are different but whether they are statistically significantly "the same". This is a huge shift in mindset for many psychologists, who suddenly learn that you can in fact have a legitimate result that there was no difference to be found. Knowing this, I suspect, will make people less wary of null results in general.
- Proper randomization (and, more generally, the practicalities of data collection). The author at some point calls it a mistake that a trial on the Mediterranean diet assigned the same diet to everyone in a family unit, thus breaking randomization. For the love of God, does he not know how families work? You cannot honestly ask members of the same family to eat differently! Sure, the authors should have implemented proper statistical corrections for this, but sometimes you have to design experiments for reality, not for a spherical world.
- Reviewers nudging authors to cite them. This may seem like blatant self-promotion, but it's worth remembering that peer reviewers are specifically selected from the EXACT SAME FIELD, so odds are good that they have in fact published relevant work, and odds are even better that they know it well enough to recommend it. That is not to say that none of it is for racking up citations, but bad faith shouldn't be assumed by default, because legitimate alternative explanations exist.
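The train/test discipline from the first bullet can be sketched in a few lines. Everything here is hypothetical (fabricated data, a stand-in pipeline); the point is only the habit of freezing the analysis before touching the confirmatory slice:

```python
import random

random.seed(0)

# Hypothetical dataset: 200 participants, each with two scores and a group label.
data = [(random.gauss(0, 1), random.gauss(0, 1), random.choice("XY"))
        for _ in range(200)]

random.shuffle(data)
explore, confirm = data[:50], data[50:]  # small exploratory slice, larger held-out slice

# Step 1: on `explore`, freely try out pipelines (outlier rules, which outcome,
#         which covariates). Anything goes here, because nothing is reported.
# Step 2: freeze exactly one pipeline, run it ONCE on `confirm`, and report
#         only that result. No further tweaking after seeing the held-out data.

def frozen_pipeline(rows):
    """Stand-in for whatever analysis was settled on during exploration:
    here, the mean of the first score per group."""
    by_group = {}
    for a, _b, g in rows:
        by_group.setdefault(g, []).append(a)
    return {g: sum(vals) / len(vals) for g, vals in by_group.items()}

reported_result = frozen_pipeline(confirm)  # the only numbers that get reported
```

The design choice mirrors machine learning's holdout set: flexibility is confined to data that will never appear in the paper, so the reported p-values keep their nominal meaning.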
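The "similarity testing" mentioned in the second bullet is usually called equivalence testing, and the common recipe is TOST (two one-sided tests). A rough sketch using a normal approximation — a real analysis would use t-distributions and a pre-registered equivalence bound:

```python
from statistics import NormalDist, mean, stdev

def tost_equivalent(x, y, bound, alpha=0.05):
    """TOST, normal approximation: reject both 'diff <= -bound' and
    'diff >= +bound' => the two group means are statistically equivalent
    within +/- bound (in raw units)."""
    diff = mean(x) - mean(y)
    se = (stdev(x) ** 2 / len(x) + stdev(y) ** 2 / len(y)) ** 0.5
    z = NormalDist()
    p_lower = 1 - z.cdf((diff + bound) / se)   # H0: diff <= -bound
    p_upper = z.cdf((diff - bound) / se)       # H0: diff >= +bound
    return max(p_lower, p_upper) < alpha

# Two hypothetical samples with nearly identical means:
g1 = [10.0, 11.0, 9.0, 10.5, 9.5] * 40
g2 = [10.1, 10.9, 9.2, 10.4, 9.6] * 40
print(tost_equivalent(g1, g2, bound=1.0))  # -> True: a positive "no meaningful difference" result
```

Note the shift in burden of proof: instead of failing to find a difference, the test positively demonstrates that any difference is smaller than a bound you chose in advance.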

Another little detail not mentioned by the author is that good science is f*cking hard. For my current experiment, I need a passing understanding of electrical engineering to run the recording equipment, a basic understanding of signal processing and matrix mathematics to clean and analyze the data, a good understanding of psychology for experimental design, a deep understanding of neuroscience for the actual field I'm experimenting in, a solid grasp of statistics, sufficient English writing skills, separate coding skills for experimental tasks and data analysis in two different languages, and suddenly a passing understanding of hospital-grade hygiene practices to deal with COVID! There's just SO MUCH that can go wrong, and a failure at any point is going to ruin everything else. It's exhausting to juggle all that, and honestly, it's amazing that we have any valid results coming out at all. The only real solution is larger teams with less focus on individual achievements. The more eyes on the scripts, the fewer bugs; the more assistants available to collect data, the fewer mishaps; the more people reading the paper beforehand, the fewer mistakes slip through. We need publications from labs, not author lists: the exact contribution of each person can be specified somewhere, but science needs to move away from this model of venerating the individual, because this is not the 19th century anymore: the best science comes from groups. On CVs, we shouldn’t write lists of publications, we should write project descriptions (and cite the paper as “further reading”, not as an end in and of itself).

Scientists need the wake-up call from this book. Journalists and interested laymen will also greatly benefit from understanding why a healthy dose of scepticism is needed towards any single scientific result, and how scientists are humans too. But the take-home message that can come across from this book, and which is not actually true, is that scientists are either incompetent or dishonest or both. The author repeatedly bashes poor science and science communication for eroding public trust in science, but ironically this book essentially highlights that in neon letters, making sure trust in science is eroded further. To some extent that is warranted, but the author could have done more to defend the institution where it deserves defending, and, as an insider, could have said more about the realities an individual scientist faces when they make these poor decisions. It's worth mentioning that science has not gotten worse: we're still making discoveries, still disproving our colleagues, and still improving quality of life. We could just be doing it more efficiently.

"The books we’ve just discussed were by professors at Stanford, Yale and Berkeley, respectively. If even top-ranking scientists don’t mind exaggerating the evidence, why should anyone else?"

Following Kahneman, we have similar claims by NASA, pop-science books like Why We Sleep, studies of austerity and Mediterranean diets, publication biases, and issues of p-hacking, cherry-picking, salami slicing, self-citation, self-plagiarism, ghost citations, review by ghost peers, and coercive citations from accepting journals.

Most people already in the field would know most of the replication crises discussed in the book, but I guess their supervisors would mostly have calmed them down: it's okay not to be able to replicate a scientific study, due to human error among other factors. It's a conditioning that runs contrary to the objectives set by the founding figures of the scientific publishing community, like Boyle.

After all, the practitioners of science are susceptible to becoming something more like an organized cult, working for incentives of various kinds, from academic survival to personal fame and status, meeting bureaucratic standards while forgetting the basic tenets of what scientific research is all about.

The last book I read was the work of a Wittgenstein student showcasing how social science was massacred by social scientists (sociologists, social psychologists, economists, political scientists, to name a few), whereas this one does the same for the natural sciences.

But rather than going philosophical, it stays limited to how science is practiced today rather than what science actually is. Maybe there are no better methods for understanding the world, but as Winch said, it's better to stay vigilant and question 'the extra-scientific pretensions' of scientific communities, which create their own norms and beliefs in their culture of practicing science.

Science Fictions: The Epidemic of Fraud, Bias, Negligence and Hype in Science (2020)


Extremely informative and well argued. I would suggest it to anyone who has any contact with science in their daily life (so, everyone). I loved the examples and statistics, and that it's at the same time really approachable. For the layperson, it's pretty shocking to hear how null results and replication studies have been treated by even reputable journals.

There are a bunch of solutions offered at the end.

The only downside is that if an aspiring scientist were to read this book, they might throw in the towel before they even start. It sort of implies that it is basically impossible to do anything worthwhile with the sort of participant samples that junior scientists have access to (alas, the dreaded p-value). Not being able to pursue your own ideas before you land that million-dollar grant, after years of being a small cog, may discourage some. It's not the romantic ideal. But I guess it's for the best.

***
“Science, the discipline in which we should find the harshest scepticism, the most pin-sharp rationality and the hardest-headed empiricism, has become home to a dizzying array of incompetence, delusion, lies and self-deception.”

Science Fictions: The Epidemic of Fraud, Bias, Negligence and Hype in Science (2020) by Stuart Ritchie is an excellent book that looks at the many problems in science and what can be done to improve the situation. Ritchie is a psychologist at King’s College London.


Science Fictions goes through how science currently works and then details the replication crisis, in which attempts to replicate studies, particularly in psychology but also in other fields, demonstrated serious problems with science as it stands.

The problems of outright fraud, bias, negligence and hype are then examined. P-hacking, dropping studies with null results, self-citing and chronic hype are all well described. The book has many examples of these problems.

Ritchie also describes the Mertonian Norms that science should seek to uphold, those of universalism, disinterestedness, organised skepticism and communalism of sharing results.

The book gets into why scientists engage in dubious activities: namely, they want to succeed, and often they believe their hypothesis is true and it just needs a bit of help. This is noble-cause corruption, though Ritchie doesn’t use the term. The push to publish, to increase scientists’ h-index, and for journals to raise their impact factor is also outlined.
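For readers who haven't met the metric (sometimes loosely called the "h-factor"; the usual name is the h-index): it is the largest h such that the researcher has h papers with at least h citations each. A minimal sketch:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# A hypothetical publication record:
print(h_index([25, 8, 5, 3, 3, 1, 0]))  # -> 3: three papers each cited at least 3 times
```

Because adding another paper can never lower the number, the metric quietly rewards volume, which is part of the publish-or-perish dynamic described above.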

Ritchie also describes how science can be improved: automated checking for statistical errors, pre-registration, open data, publication in open-access journals, and credit being given for replications and for well-obtained null results. Ritchie also points out that science has had plenty of success recently, and even with the current problems it still achieves incredible things.

Science Fictions is a fine book that is well thought through, well written and fun to read. It would be very hard not to learn something from reading it.

Really enjoyed this description of some of the big problems in science today. This is not in any way an anti-science book; Ritchie makes clear that he wants to improve science, not to dispense with it. Along with describing problems, he also describes much of the process of science, which I enjoyed.

He spends a lot of time on the reproducibility crisis, p-hacking and other statistical cheating, and many other issues that one hears about when science problems get in the news.


This book has been well reviewed in general publications, but I was curious how science journals would review it. The only review I found in a professional science periodical (in the ultra-prestigious Nature) was basically positive with a few criticisms.


As other reviewers have said: a sober, balanced, hopeful cataloguing of perverse incentives in present-day science, plus some potential fixes. A highlight was learning about this actual paper, accepted for publication in the 'International Journal of Advanced Computer Technology'.

What stood out for me was how superbly well-written this was. It's friendly and funny and crystal clear. Infuriating for anyone who's tried and failed hard to find that voice in their own writing!

Incredible book that I binged in a day. As an influencer who often references psychological studies but also knows how much bad science is out there, I’m always trying to learn more about this subject.

This author did a great job not just giving examples of bad science, but he explains WHY it’s happening and offers solutions. Absolutely loved this book and hope some journalists read it as well before they keep reporting on hyped up science.


Essential reading for graduate science researchers, although much of the material will hopefully be familiar to them.

Ritchie writes clearly. He's likeable and scientifically and statistically literate, but doesn't take himself too seriously. He's a great science populariser even when he is denigrating science!

Ritchie helped kick off the well-publicised replication crisis in social science in 2012 when he attempted and failed to replicate a parapsychology paper. The original paper by Bem purported to show that we can study for a test after we have taken the test to improve our test results.

Obvious nonsense, right? No surprises it failed to replicate.

The major problem, as the original authors noted, is that their methods weren't all that different to many of the papers being published in social science.

Ritchie does a nice job explaining to a lay audience concepts like the p-value and statistical significance, and common dodgy statistical methods such as p-hacking and HARKing (hypothesising after the results are known). He also outlines how the issues are exacerbated by perverse incentives in academia, such as publish-or-perish and the need for results to be statistically significant and sexy.
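The mechanics of p-hacking are easy to demonstrate with a toy simulation (my own illustrative sketch, not an example from the book): under a true null effect, a single pre-specified test yields a false positive about 5% of the time, but testing many unrelated outcomes and reporting whichever one "worked" inflates that rate dramatically.

```python
import math
import random

def z_test_p(sample, mu=0.0, sigma=1.0):
    """Two-sided p-value for the mean of `sample` under N(mu, sigma)."""
    n = len(sample)
    z = (sum(sample) / n - mu) / (sigma / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def run_study(n_outcomes, n=30, rng=random):
    """One null study: p-values for several unrelated noise-only outcomes."""
    return [z_test_p([rng.gauss(0, 1) for _ in range(n)])
            for _ in range(n_outcomes)]

def false_positive_rate(n_studies, n_outcomes, seed=1):
    """How often a study reports 'significance' if any p < 0.05 counts."""
    rng = random.Random(seed)
    hits = sum(min(run_study(n_outcomes, rng=rng)) < 0.05
               for _ in range(n_studies))
    return hits / n_studies

# One pre-registered outcome keeps the false-positive rate near 5%;
# testing 20 and reporting the best inflates it towards 1 - 0.95**20.
print(false_positive_rate(2000, 1))   # ~0.05
print(false_positive_rate(2000, 20))  # ~0.64
```

The point of the sketch is that nobody needs to fake data: simply measuring many things and reporting the "significant" one is enough to flood the literature with false positives.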

Ritchie also recounts some good narrative non-fiction around some of the most high-profile cases of fraud, such as Diederik Stapel (the Bernie Madoff of science) and Paolo Macchiarini (who claimed he was healing people with risky procedures - as opposed to killing them!).

Twitter user Alvaro De Menard is less optimistic than Ritchie. De Menard points out, with a systematic deep dive into social science papers' replicability, that this isn't a recent phenomenon, and argues that any proposed way forward to fix the crisis from within academia is unlikely to succeed.

A sober analysis of the ways in which the current scientific institution incentivizes poor research practices. Useful for graduate students and the general public alike, the book catalogues the systemic flaws which result in unreliable results.

Most importantly, you will come away knowing what you always suspected: Nutritional science rests on a firm foundation of bull****.

While the book will most likely be received as an indictment of science, it should actually inspire awe and optimism. For in leveling criticism against current malpractice, Ritchie has reminded us of the capability of our scientific tradition to error-correct: to be criticized using its own tools and to admit its own faults. Indeed, much of Ritchie's analysis itself relies on scientific studies which shine light on all the (admittedly, often exceptionally fucked up ... looking at you, Paolo) ways in which the current system is sub-optimal. Contrast this with many other traditions, whether religious or political, which don't have the capacity for, or actively discourage, such self-awareness. Typical religious traditions explicitly denounce such self-criticism as heresy, for example. The fact that Ritchie can publish this book without fear of being ostracized (or stoned, or hanged, or burned, or imprisoned, or eaten by a whale ...) and probably contribute to bettering scientific practice should be cause for celebration. We've done it: we've created a perpetual progress machine.

Ritchie ends by giving some suggestions for how to improve the reliability of scientific findings. These include:

Multiverse analysis: looking at the average effect over many (possibly all) valid methodological choices

Pre-registration: committing to looking at a single effect, ideally using a single type of analysis

Pre-print grading system: all papers are first distributed as pre-prints. They are then given a grade by independent investigatory groups. Journals then select the papers they’d like to publish

Data integrity tests by journals: requiring journals to run submitted papers through automated tests which pinpoint errors in the analysis

Enforced Transparency: Requiring that all data, analysis, and initial drafts of a paper be made available for scrutiny.

All in all, an excellent read.

Goodhart's law states "When a measure becomes a target, it ceases to be a good measure", which sums up the premise of the book perfectly. For centuries, science has tried to give value to subjective knowledge and academia relies on these often arbitrary metrics. But all we have done is create a system which can be gamed, and populated that system with clever (mostly) people who are heavily incentivized to game the system.

As someone who has published scientific research, peer reviewed others' work and worked for various funders, I found that many of the author's criticisms of 'science' and how we incentivise it hit home. Nothing in this book was particularly surprising or new to me, but I had never considered the extent of the combined effect all these individual imperfections in our system are having on the quality of the science being produced (and how much time and money is being spent on nothing more than maybe furthering someone's career).

The most fascinating part of the book was the reference to the field of meta-science (the science of science), which has started to quantify just how bad malpractice in science is and to analyze aspects of the funder-scientist-journal relationships.

This will be an uncomfortable read for many in the world of science, or anyone who has recommended the findings of a popular science book to their friends. However, it is essential reading and hopefully will help us all get closer to that elusive concept of 'truth'.

This book reminds me of another book I read, The Color of Law. Like The Color of Law, this is a book that is very informative and will transform the way you think about the topic it discusses, but it lacks a few key ingredients that prevent it from being a 5/5 read.

This book starts out by introducing you to how science works and how that science is publicized.

1. A scientist comes up with a scientific theory/question to look into and comes up with a hypothesis.

2. The scientist attempts to get a grant for research into said theory/question

3. The scientist conducts experiments and collects data

4. The scientist tries to get his findings published via peer review.

The book proceeds to break down why the current process of peer review is flawed, which largely stems from the fact that scientists don't replicate studies to ensure they are accurate.

As a result, various "scholarly resources" are utilized by doctors, psychologists, and scientists without them realizing this, and people have died because of this dilemma.

Scientists often engage in outright fraud, negligence, or hype in order to get a particular scientific discovery published, because they have perverse incentives to do so. Sadly, world-renowned scientific publishing companies like Nature and Science display a preference towards scientific articles that are flashy, new, and eye-popping. It's very hard to produce an article/paper that's new/positive/eye-popping naturally, so scientists skew their results.

This book also provides potential solutions for resolving these issues such as requiring submitted papers/articles to be assessed by algorithms that are programmed to spot fake data in scientific papers.

But yeah, in terms of how informative this book was: as you can see, it's very informative and changes your perspective on scholarly databases. However, as previously mentioned, this book was missing that thing that would put it over the top for me.

A. This book wasn't an entertaining read at all. The information in this book was great, but the book itself was kind of boring.

B. The book was repetitive; it would use endless examples to repeat or overly explain a concept.

Science is the new god, a deity that has gotten its power too fast and too much, and it has behaved the same way a human would: it became power-drunk. The hostile takeover it did on religion as the preeminent field of knowledge that guides human affairs left it feeling smug, thinking itself immune to the same pitfalls that had befallen religion. The end result? The current epidemic of fraud, negligence, bias and hype.

Stuart Ritchie takes us across the current landscape of science as practised, not as it should be done. We hear stories of numerous researchers who manufactured data out of thin air, others who tortured the numbers until they told them what they wanted to hear, and some who knew the art of mixing a perfect cocktail of words to take mild effects stratospheric. It was a fascinating tale. This book is one of those current books which are actually worth it, rather than the common practice of taking one good Atlantic article and padding it with enough hot air to pass off as a book. Hats off to you, Stuart; you did a splendid job in writing this.

Still, the current state of science is not an indictment of science itself but rather of the human incentives guiding science. The publish-or-perish rat race, limited funds, corporate interests and humans with suspect morals taint an otherwise noble field. Ritchie has a couple of worthy recommendations which, if implemented, might save science some face, but I won't be too optimistic; these ideas have been floating around for a while and very little gets done. I think the public should read this book and develop a healthy skepticism whenever they hear sweeping declarations made by so-called researchers on the "cutting edge."

This was a good read, in the sense that it clarified the problems that science faces (at least, for me). Ritchie does a good job of bringing in a lot of material to tell the story of the troubling findings of recent years. Ritchie focuses mostly on psychology, his area of expertise, and so while I found his thoughts very interesting for psychological and medical research, I did wonder about how some of it might translate to mathematics or physics. Reproducibility does not appear to be as much of a problem for the natural sciences, and so I appreciate Ritchie's thoughts on this issue as a reader of general science.

The part that really fits well for all of science is the incentive structure of science: that is, the publish-or-perish paradigm that seems to be dominant right now. The author does an excellent job of explaining the perverse incentives and offers some ideas to change the incentive structure to value replication, openness, and transparency. I think this is the most useful part of the book, even if the other parts are necessary to explain the scope of the problem.

If you are interested in science news, this is an excellent resource for understanding what to look for when deciding whether a new result is likely to hold in the long run. It also does a good job of explaining what openness means for science. It's also relatively short. From a stylistic perspective, I thought the writing style was fine (I did not think it was excellent, but neither did I think it was below average), and I enjoyed Ritchie's use of epigraphs. Overall, I think it was a good thing to read. If you are interested in improving the science incentive structure, I think it would be highly advisable to read this book.

Science is an ancient and vaunted establishment. It has done so much good for the world that it is sometimes easy to forget that science is a human construct. Scientists are human beings and are prone to making mistakes. This issue already surfaced for me in nutritional science. It seemed that every week a new food was discovered to cause cancer or to extend your lifespan. It sickens me to no end.

Science Fictions is by Stuart Ritchie. It discusses the various issues that scientists have to deal with when doing science. Ritchie goes into several ways that scientists manipulate their data or lie. The system itself is flawed, since scientists only publish positive results. They forget that the simple act of reporting the research, whatever the result, would save someone time down the road. The book highlights these issues and provides some solutions.

Succinct and often funny diagnosis of the currently broken scientific system. Bad incentives combine with the human drive towards status and prestige and intersect with organizational desire for profit: a recipe for the disaster that has been unfolding for at least the last 20 years. Not much here is new to those familiar with the modern scientific system, but it is still worth a read for a condensed and comprehensive overview of the issues.
Prior to reading this book I had a sceptical stance towards most published research, with a view of "anything published might be true"; now I think my stance is closer to "anything published is possibly false unless replicated".

Science Fictions is an important book, which is aimed primarily at a non-scientific audience. It builds upon a number of other books about scientific practices, and covers some of the same ground as work by Ben Goldacre [1, 2], whom the author cites. The author is a lecturer in Psychology, and most of the examples are drawn from this area, medicine, and the social sciences in general - you can expect to read some discussion about, e.g., the Andrew Wakefield scandal, Cochrane reviews, pre-study registration, and how perverse incentives hinder science.

The book starts by discussing the 'what' of science - in particular, how it works practically. In theory, science is as simple as coming up with a hypothesis, designing an experiment to test it, and then disseminating the results. In practice, Ritchie argues convincingly that much of science is social. Scientists (postdocs or above) must first apply for a grant to fund their research. This is often, though not always, from the government, industry, or charities. In order to actually get the funding, scientists have to put forward grant proposals, which are judged by other scientists. Assuming that a scientist gets a grant - and grants vary hugely in size - they can then go on and do the experiment (often not themselves - grants usually contain funding for PhD students or postdoctoral researchers) and test their hypotheses. At this point, they write up their findings into an academic paper and try to publish it in a journal. Papers undergo peer review, usually by one or two reviewers, and the work becomes part of the scientific record.

Ritchie takes aim at research which is not replicable. Replicability is the idea of being able to take a piece of research, conduct the experiment in the same way as the original authors, and get broadly similar results. In particular, Ritchie speaks from his own field, which is famously undergoing a 'replicability crisis' - many key results have been found not to be replicable when performed with larger sample sizes, more rigorous methods, or just generally subjected to higher scrutiny. As Ritchie points out, this can sometimes be due to simple mistakes in the analysis, particularly around randomisation of trial participants and statistics. In other cases, results were fraudulent to start with; in others, people attempting replication can't even try to do the experiment, because the authors haven't included enough information to attempt it. Ritchie discusses the fact, well known at least among scientists, that studies that make it into the popular discourse are often overhyped - often via press releases written by the scientists themselves which are copied almost verbatim by news outlets - and that well-received popular science books are often based on flimsy evidence, or hugely overstate the significance of the results. He also makes the argument, and I completely agree, that this replicability crisis appears to be occurring only in Psychology because other fields are not looking very hard for it. Personally, I think that many scientists in other fields are complacent about this.

The reasons for this replicability crisis are complex, and I think Ritchie does a good job of discussing the mix of perverse incentives that drive people to publish work that doesn't stand up to scrutiny: the difficulty of publishing negative or null results, leading to distortion of the published record towards positive results; the drive for promotion, or even for getting a permanent position at a university; the need to publish in 'high impact' journals; the requirement for a large number of papers to be successful in grant applications; and even more direct financial incentives, such as grants paid to researchers by universities. This leads to scientists splitting a single study into many papers - a practice known as 'salami slicing' - sometimes in a particularly egregious way, because the sheer number of publications leads to better chances of promotion or grant funding.

Finally, the book moves on to discussing how to improve the current situation. Some suggestions are obvious, such as requiring pre-study registration across fields, with publication agreed in advance of the study. Others were particularly interesting and unfamiliar to me: he discusses ways of performing many analyses on the same dataset, known as 'multiverse analysis', to work out whether the results of a study are positive only because of a fluke in the choice of statistical approach. I do think that in this section, Ritchie misses a chance to discuss the 'Open Science' movement in more detail. This spans a broad range of ideas, including anyone being able to access publications that are publicly funded, and the idea that anyone can download a dataset from the original paper and re-run the analysis on it. He does not mention the fact that this is now a reality in the UK in most fields - as any scientist will know from the many e-mails they receive on the topic, all research funded by government research bodies requires research data to be uploaded in some form or another [3].
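As a toy illustration of the multiverse idea (my own sketch with made-up data, not an example from the book): you enumerate every defensible combination of analytic choices - here, an outlier cutoff and a choice of summary statistic - and look at the whole spread of effect estimates rather than the single estimate a motivated analyst might report.

```python
import itertools
import random
import statistics

rng = random.Random(42)
# Made-up data: a true treatment effect of +0.3, plus noise and two outliers.
control = [rng.gauss(0.0, 1.0) for _ in range(200)]
treated = [rng.gauss(0.3, 1.0) for _ in range(200)] + [6.0, 7.0]

def estimate(treated, control, trim_above, center):
    """Effect estimate under one combination of analytic choices."""
    t = [x for x in treated if x < trim_above]
    c = [x for x in control if x < trim_above]
    return center(t) - center(c)

# The 'multiverse': every combination of defensible choices.
trims = [3.0, 4.0, float("inf")]                # outlier policy
centers = [statistics.mean, statistics.median]  # summary statistic
effects = [estimate(treated, control, trim, center)
           for trim, center in itertools.product(trims, centers)]

print(f"{len(effects)} analyses, estimates from {min(effects):.2f} "
      f"to {max(effects):.2f}, average {statistics.mean(effects):.2f}")
```

A single cherry-picked universe (say, no trimming plus the mean, which the outliers drag upward) can look quite different from the average over all the universes, which is the quantity the multiverse approach reports.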

However, there are many problems with this practice. Ritchie argues in the book that there are cases, such as in medical research, where you can't release much of the data for anonymity reasons, but there are also commercial confidentiality clauses with non-governmental funders where a researcher is funded by multiple sources. In addition, it's worth asking what actually constitutes research data. My own field is Physics, and in the papers I've worked on, we've taken the approach of trying to publish all of the simulation scripts (so someone can rerun the simulations and get the same results), the data itself (so someone can re-run the analysis from scratch on the same data), and all of the scripts that we used to run the analysis ourselves. Other authors in the same field publish the bare minimum - usually just an Excel spreadsheet with data from their analysis from which the figures can be generated. There is no checking whether people are following the spirit or the letter of these requirements, and it's very time-consuming to do it - time which most researchers do not have if they want to be promoted, especially given that they get little credit from funders and their institutions for doing it properly. Even making data public when you want to do so is not straightforward - there just is not a clear way of doing it. In the last research study of my PhD, my research data consisted of around 200GB of data generated on a supercomputer. University services are generally not set up to host this amount of data, relevant or not. In the end I had to cut this data down by removing data which could have been interesting to others, heavily compress it, and upload it to the CERN-funded provider Zenodo, which allows uploads of up to 50GB for free and provides a Digital Object Identifier (DOI) which allows the data to be cited easily if reused.

The debate around making research software public as part of 'open science' is also becoming increasingly heated. Making software public is not in and of itself a panacea - the same software can give different results on different machines due to the underlying architecture, operating system or operating conditions. Even getting scientific software from other research groups to run is not straightforward, and requires a high level of expertise in and of itself. Many scientists in computational areas argue that the changing practices and expectations around research software differ from those in lab-based subjects: if someone comes and uses your laboratory to run an experiment using equipment you have built, you would normally be named on the paper, which is certainly not the case for research software. Most people working in an area will happily use tools like R and Python, and packages within them, but will not cite the paper even of the person who developed a particular statistical test they are using, let alone the person who has implemented it in the software they are using. People who implement scientific software and make it public have had a tough job sustaining a career within science, and the reality is that most leave for greener pastures in industry, which is a great loss. Over the last 10 years there have been moves to support people who work on research software - something which large numbers of people use - with a career track of their own, known as Research Software Engineering [4], and positively, this is something that funders across the world have begun to support explicitly.
One of the big problems as I see it here, however, is that in explicitly training people from scientific fields to become software engineers, they open themselves up to much more lucrative job openings in the private sector, and this expertise required to help researchers is lost - effectively delaying the departure from academia rather than stopping it completely.

In addition to this, scientific software can be subject to high levels of scrutiny, even by non-experts. The open science movement is starting to suffer from a culture where anything that is not absolutely perfect is heavily criticised on social media channels such as Twitter - Dr Olivia Guest and Prof Kirstie Whittaker have discussed this in depth under the moniker of 'bropen science' [5]. If the research concerns a topic which is contentious for, e.g., a particular political leaning, non-scientists jump onto the bandwagon of criticising software in topics they do not understand - regardless of whether the studies are replicable by others with better software or not. A recent example is the criticism of the COVID-19 modelling software produced by the Imperial College group, which was used to inform lockdown policy throughout the world [6]. While this software clearly left a lot to be desired, other groups were able to replicate the study results both using the software as-is and using their own software. As one can imagine, the resulting criticism can be overwhelming and unexpected for people who are just trying to do their best, and it leads many people to want to do less open science rather than more. This sort of (usually unfounded) criticism by non-scientists often comes from similar sources as criticism of climate science, which Ritchie does touch upon.

All in all, I think that this is an excellent and well-timed book. I nodded my head through much of it, being able to relate parts of it to experiences in my own scientific career, and I think that most scientists who pick up a copy will do the same. I think also that making these criticisms available to the general public is important - as Ritchie says, scientists should have to work at least a bit harder for our trust, which is something I strongly agree with. Scientists are human, and suffer the same flaws no matter what career they undertake, be it science or otherwise.


Ethics declarations

Ethics statement

The present study was approved by the author’s home university and run in accordance with the British Psychological Society code of ethical conduct. A potential conflict of interest has been declared throughout the ethics process because this study was funded by a San Francisco based company called All Turtles on behalf of Spot, and one of the three authors of this paper is the co-creator of Spot. Spot is an AI chatbot that was based in part on the results of the present research but has since been modified for broader purposes. The most recent version of Spot can be accessed for free by individuals via https://app.talktospot.com/. The AI CI used in this study was specifically designed for research purposes, and if you would like to conduct research using this version it is recommended that you contact one of the authors of the present paper.


The Less Than Truthful "Truth Project"

The Truth Project (TTP) is a 12-part DVD series produced by Focus on the Family to encourage Christians in an understanding of a biblical worldview, and especially its application for their lives. Now, before anyone gets too excited (or put off) regarding the title of this article, let me say that there is much in this series that deserves praise. There are, however, some serious problems.

The first thing that blessed me as I began my 13-hour tour through the episodes was Del Tackett's exhortation to his mostly college-age students to "think!" That's not exactly a common characteristic among today's postmodern and experientially prone generation, and that includes young evangelicals whose biblical "education," for the most part, has consisted of some form of entertainment. Of course, Tackett wants them to think biblically and deeply. Amen to that!

Del Tackett is an excellent communicator. He not only has a command of his subject matter, but he also exudes an infectious passion for the Lord and for His Word. You can see that those "students" selected to participate in the program weren't just props for a well-designed production and set—it appears that they were really understanding, perhaps for the first time, some of the biblical insights that were being taught.

Many of the teachings are solidly biblical, such as the episode that addressed "The Family," which alone may be worth the price of the entire series. We couldn't agree more with Tackett's professed desire to encourage all believers to have a love for the Scriptures and to get to know the heart of God through the revelation of His Word. In view of the sad fact that there are very few quality productions that deal with apologetics and are directed at young adults, I initially wrestled with whether to raise any of the critical issues that concerned me. In other words, I didn't want to put people off regarding a series that I believe has biblical value. What finally motivated me to address what I found to be problematic teachings were two thoughts: 1) Everything in life, in every way possible, needs biblical scrutiny. If it has the support of Scripture, then we need to be encouraged to make it a life support. 2) A stated objective of TTP is to exhort believers to think through all teachings, test all theories, doctrines, and dogmas. They encourage one to ask relevant questions, especially concerning the consequences regarding what is being taught—in particular, where is it leading or headed? That seemed to be the video's marching orders, so there should be no objections to my doing just that.

In spite of all that impressed me about The Truth Project, there were still some troubling aspects that tempered, even dampened, my enthusiasm at times. It reminded me of an ocean voyage that I took before I started graduate school. It was a terrific cruise if you could handle the seas that the North Atlantic was dishing up. I'm not easily susceptible to seasickness, so it was quite enjoyable for me. The last thing that entered my mind was whether or not the ship was on course to Southampton, England. I assumed that it was and gave it no further consideration. That memory resurfaced as I thoughtfully sailed with the "good ship Truth Project."

I wasn't too far along into the series when I realized that some of the "crew and its captain" were Calvinists. Del Tackett, according to his biography, was introduced to the Reformed faith in the late 1970s and started The New Geneva Theological Seminary as a branch of Knox Seminary in 1992. Knox Theological Seminary is a ministry of Coral Ridge Presbyterian Church and is a bastion of Calvinism. Tackett is now associated with Coral Ridge Ministries and does a weekly TV program for them called Cross Examine. One of the main contributors to TTP is R. C. Sproul, perhaps one of the most influential Calvinists of our day. Although there is no overt teaching regarding Reformed Theology, its influence is noted throughout, including quotes from The Westminster Confession and a PowerPoint slide presentation declaring man's needs: "Grace, Regeneration, Redemption." Calvinists teach that a person must be regenerated by (irresistible) grace before he can believe and be saved.

Perhaps even before my concerns about Calvinism, the fact that TTP was a production of Focus on the Family bothered me. Why? I can think of no ministry that has sown the seeds of psychotherapy among evangelicals more deeply than Focus on the Family, with psychologist James Dobson. Focus on the Family has made the humanistic teachings of self-esteem and self-love the pillars of their organization. Del Tackett was president of Focus on the Family Institute during the TTP production, and his organization's self-esteem bias shows itself as he declares that God has given everyone a "hunger for significance." That selfist teaching is certainly biblical—but not in a good way. It caused the fall of angels and mankind. Lucifer's desire to raise his "significance" level in heaven (Isaiah 14) and Eve's desire to be "as the gods" (Genesis 3) obviously indicated their "hunger for significance." Nevertheless, Tackett recognizes the errors of humanist psychologists Abraham Maslow and Carl Rogers. His criticism takes place in a TTP episode labeled "Anthropology." I find that odd because the issues in this episode are related specifically to psychology, which, curiously, is rarely mentioned in TTP. That missing topic appears less than honest, given the Focus on the Family connection.

Focus on the Family has not only been a chief promoter of psychological counseling; it is also the foremost referral service among evangelicals for professional psychotherapists. Although TTP says that it desires to turn young people from the ways of the world to a biblical worldview, it seems to have intentionally avoided that "sphere" of psychology and its devastating effects upon mankind.

The prestigious Princeton Review reports that the second most popular field of study for all U.S. college students (secular or Christian) is psychology. Young evangelicals, perhaps even more so, are attracted to and encouraged to choose a career in the pseudo-science of psychotherapy. Ironically, here is what Dr. James Dobson recommends: "Christian psychology is a worthy profession for a young believer, provided his faith is strong enough to withstand the humanistic concepts to which he will be exposed." No. So-called Christian psychology is both a contradiction in terms and the chief dispenser of "self" teachings in the church (see Psychology and the Church), yet too few are warning our next generation about this incredibly destructive worldview.

There was certainly no hint of an alarm in The Truth Project! The series devotes two hour-long sessions to exposing the pseudo-science of evolution but clearly avoids the even more spiritually deceptive pseudo-science of psychological counseling.

The single most puzzling item in TTP is the inclusion of a man who unequivocally represents a false gospel: Roman Catholic priest Robert Sirico. Who made that decision? And why is he in The "Truth" Project? Although nothing could be worse than featuring, under the guise of truth, a man who participates in leading one billion-plus souls away from biblical truth, Sirico has some other issues. Prior to his being ordained a Paulist priest, he was a minister for the Metropolitan Community Churches, a professing evangelical movement that was developed to refute the biblical condemnation of homosexuality. In 1975, Sirico performed the first-ever civil-licensed same-sex marriage. He is now pastor of St. Mary Catholic Church in Kalamazoo, MI, and president of the Acton Institute, an organization that promotes the "coming together of faith and liberty" through "integrating Judeo-Christian Truths with Free Market Principles." His "faith" is not the faith for which Jude urged believers to contend. But such distinctions are increasingly blurred in this ecumenical age. One of the Acton Institute's board members is Dr. Gaylen J. Byker, the current president of Calvin College. Sirico seems to hold the same attraction for evangelicals as does Mormon Glenn Beck, both of whom are very vocal in their promotion of "getting America back to her Christian roots."

Almost none of "Father" Sirico's involvement in TTP seems to make sense (especially considering the historic anti-Catholicism of Calvinism) until one pulls back to see the direction in which the "ship" is headed. It seems to be a reprise of Christian Reconstructionism directed at this next generation of evangelicals. Reconstructionism is a Calvinist-based movement that was popularized by Rousas Rushdoony, Gary North, Greg Bahnsen, and Gary DeMar. Also known as Theonomy, it proposes social and civil governments that are based strictly upon the Laws of God in the Old and New Testaments.

Reconstructionists believe that through the application of God's Laws the earth will be transformed and the Kingdom of God ushered in. Del Tackett preaches that concept in subtle and not-so-subtle ways throughout the series. There's little doubt that he is attempting to emulate John Calvin's vision for the city of Geneva, which Calvin hoped would be a utopia of Christian principles worked out in daily life. That may be the reason Tackett called the seminary that he founded The New Geneva. He is currently on its Board of Directors and a faculty member. Calvin's "biblical Law" experiment in Geneva, however, became so legalistic that he was referred to as "the Protestant Pope."

Calvin's historic failure to apply the Law didn't seem to dissuade Tackett in The Truth Project. For example, he turns to the Fourth Commandment as a principle for New Testament Christians to incorporate into their lives regarding what he calls "The Sphere of Labor." Although the commandment is directed at the Israelites, instructing them to "remember the Sabbath day, to keep it holy," Tackett presents it as a principle meant for believers, especially regarding their attitude toward work. This is Theonomy, as well as being a misapplication of the Scriptures. Nine of the 10 Commandments involve moral issues (do not steal, lie, murder, etc.) that are written upon the conscience of man; the Fourth Commandment is not. It is a separation law written for and to be obeyed exclusively by the Israelites (Exodus 16:29; 31:14-16; Deuteronomy 5:15, etc.). We can certainly appreciate Tackett's application of New Testament instructions for the believer today, but attempting to apply the Laws of Moses could constitute legalism, as well as leading to "another gospel" (Galatians 1:6-7).

Reconstructionism is never mentioned, but The Truth Project's suggested reading list is loaded with Amillennialists/Calvinists such as Abraham Kuyper and A. W. Pink, some key Reconstructionist figures such as Rushdoony and DeMar, and Coalition on Revival enthusiast Gary Amos, among others. Reconstructionists are Calvinists, and many, if not most, are amillennialists and preterists (with some notable exceptions).

This means they believe that the church and the world are now in the Millennium and that nearly all the prophecies of the Bible have been fulfilled. That may be why prophecy is nowhere to be found in The Truth Project, which is a huge loss for the hope of developing a confident biblical worldview. Fulfilled prophecy is the best apologetic for proving that the Word of God is of supernatural origin and that we can turn to it with great assurance. It also indicates what lies ahead for the church and the world. Simply and clearly, Scripture foretells that the imminent Rapture of the church, the Great Tribulation, the Second Coming, the Millennial Reign of Christ, the Dissolving of Our Present Heavens and Earth, and the Creating of a New Heaven and New Earth will all take place, in that order. (See Temporal Delusion.)

The reason that this isn't presented in TTP no doubt has to do with its eschatological perspective. The above prophetic biblical scenario does not fit with amillennialism (or postmillennialism) or any of the other attempts to "transform our culture," "restore our nation," or "fix the world's temporal economic, social, health, injustice, ecological, and other problems." All of this contributes to a temporal delusion, which is simply not biblical. The "worldview" of Scripture is not global transformation, a term used throughout TTP—not, that is, until the Millennial reign of Jesus Christ. Even then, it will not be a perfect society because the Bible tells us that it will end with the rebellion of those who went along with the laws and principles of Christ's rule but who never committed their hearts and minds to Him.

Tackett chides those who have a "why polish the brass on a sinking ship?" mentality. He seems to be referring to Christians who have abandoned their biblical responsibilities because of an erroneous interpretation of Scripture. We would also take issue with those who think like that, if it indeed characterizes their attitude. The true scriptural view is that the events presented above will literally take place and need to be considered in regard to any plans or agendas of men or ministries. We should not expect worldwide revival when the Bible indicates that the Last Days will be characterized by great spiritual deception in the world and apostasy in the church. Does that mean that we bail out on the world? No. But there is no scriptural basis for believing that the world will be or can be transformed through biblical law or biblical principles. To truly believe that, one would have to literally excise the Book of Revelation from Holy Writ, along with hundreds of other scriptures.

We believe that the mandate for believers in our day is a rescue operation of individual lost souls, not a program of collective global transformation. True believers certainly need to reflect the teachings and love of Christ in every aspect of their lives, but they are to do so first and foremost to please the One who saved them. It may be that some will turn to Christ because of a believer's steadfastness and fruitfulness in the faith, but that will be the exception in this rebellious world, as biblical prophecy clearly indicates. The message of the gospel will never be popular in the world because "the preaching of the cross is to them that perish foolishness...unto the Jews [it is] a stumblingblock, and unto the Greeks foolishness" (1 Corinthians 1:18, 23). Furthermore, rather than drawing the multitudes to Christ by example, Scripture states, "...all that will live godly in Christ Jesus shall suffer persecution" (2 Timothy 3:12). Jesus declared, "If the world hate you, ye know that it hated me before it hated you. If ye were of the world, the world would love his own: but because ye are not of the world, but I have chosen you out of the world, therefore the world hateth you" (John 15:18-19).

In summary, The Truth Project, in our view, is akin to a troop support ship with a mixture of supplies aboard that need to be carefully scrutinized. More importantly, if the ship's compass is off even a few degrees, the vessel will not reach its intended port. TTP has some excellent content along with some erroneous teachings, but its "intended port" of transforming the world is not on the course set by the Scriptures. TBC


Zecharia Sitchin and Our Alien Ancestors

Zecharia Sitchin (1920-2010) studied economics at the University of London and was best known for his fringe theories on the origins of Earth and mankind’s celestial (alien) ancestry. According to his official website, www.sitchin.com, he is “one of few scholars able to read and interpret ancient Sumerian and Akkadian clay tablets.” His interpretations and theories were compiled into his seven books known as The Earth Chronicles. In his first book, The 12th Planet, and its sequels, Sitchin claims there is a 12th planet beyond Neptune known as Nibiru that reaches our inner solar system once every 3,600 years. According to Sitchin, an advanced race of human-like extraterrestrials called the Anunnaki live on Nibiru and are the missing link in Homo sapiens evolution. There have been no new postings on Sitchin’s official webpage since 2017, but some 4,126 people follow the Zecharia Sitchin Facebook page, which continues to make posts to this day. Additionally, Sitchin’s books have sold millions of copies worldwide and have been translated into almost 20 languages, so his influence is certainly noteworthy. The belief in Zecharia Sitchin and what he professed is important because it attempts to answer some of humanity’s timeless questions, namely, “Why are we here?” and “How did we come to be here?” However, his explanations provide an extraordinary answer: they contradict our current knowledge of our solar system and the celestial bodies found therein, and they can be neither scientifically proven nor disproven, as the only evidence is based upon subjective interpretations.

According to Sitchin, the Anunnaki arrived on Earth 450,000 years ago looking for minerals, namely gold, which they began mining in Africa. When Anunnaki miners became displeased with working conditions, it was decided that Anunnaki genes and Homo erectus genes would be engineered to create slaves to replace the miners, thus resulting in Homo sapiens, or mankind as we know it. The evidence cited in support of this belief can be found through a link on the Facebook page, which takes you to a website called enkispeaks.com. There we see evidence quoted from his first book, which included various statements pertaining to Sumerian space maps showing planets that would have been beyond their ability to detect: “Sumerians lacked telescopes and couldn’t see Uranus’ and Neptune’s orbits the route maps (from Nibiru to Earth) show. Nibiran-dictated maps prove they had astronomical info Sumerians, on their own, didn’t. The maps accurately detail the entire Earth from space, a perspective impossible for ancient Sumerians on their own” (Sitchin 275). This map was discovered on a clay tablet in the ruins of the Royal Library at Nineveh. Additionally, on Sitchin’s official website there is an article pertaining to a piece published in Science magazine by Mathieu Ossendrijver in January 2016, which discusses a 350-50 BCE Babylonian cuneiform tablet that accurately details the position of Jupiter based on geometrical calculations. This article is offered as evidence that ancient civilizations had planetary knowledge they were not expected to have, which is therefore assumed to have come from the Anunnaki. In opposition to these beliefs we see experts such as Dr. Michael S. Heiser, who holds a Ph.D. in Hebrew Bible and Semitic Languages, posing critical questions to Sitchin regarding his interpretations of the Sumerian texts.
Heiser asserts that while Anunnaki is indeed found in Sumerian literature (182 times, according to Heiser), there is no mention of a connection between them and Nibiru, or a 12th planet. Heiser also questions Sitchin’s reasoning for interpreting Sumerian words such as “naphal” to mean fire, or rockets, which leads to an interpretation of the word “Nephilim” to mean “people of the fiery rockets.” Heiser asserts that Sitchin’s interpretation of this word is without adequate explanation, nor is there a single ancient text where naphal has that meaning.

Zecharia Sitchin’s lack of a formal education in Semitic studies likely led to an inaccurate and therefore misinformed reading of the Sumerian texts. One could argue he suffered from confirmation bias as he moved through the literature, distorting the meaning of certain words in an ignorant effort to fit his beliefs. Furthermore, we see a section on Sitchin’s official website discussing a Washington Post article from November 2017 wherein David Morrison, Ph.D., a senior scientist at NASA, states that Nibiru is not real and that there is no 10th planet. The author of the website responds with a red herring, stating that “he [Morrison] just wants to get on with his real work and not worry about answering questions.” This in no way addresses Morrison’s statement, nor does it provide evidence that argues against it.

My first introduction to Zecharia Sitchin and his books was through my parents, who are both disfellowshipped Jehovah’s Witnesses. After leaving “the truth,” my parents were in search of a new truth that answered the big questions their previous faith no longer did. However anecdotal, I imagine many previously religious people who are no longer sure of their belief in a traditional God could find themselves drawn to the appearance of science in Sitchin’s books. As more secular voices are heard through the internet, there is an increasing availability of confirmation bias among belief communities, as well as increased access to “bad science” with no guideposts for truth. Sitchin’s theories are appealing to those who now seek a more “scientific” answer to questions that were previously answered by religion. Moreover, Sitchin relies on texts such as the Bible (Genesis), which may be an added comfort to new believers as it is already familiar. Furthermore, Sitchin’s books being translated into over 20 languages bridges communication gaps and widens the base of believers beyond a single region or language.

Even after Sitchin’s death in 2010, “scientific evidence” for his books was still being shared on his website up until 2017, and many other scholars have written about his work and added their own supportive evidence, as seen through the Zecharia Sitchin Facebook page. This ongoing dialogue could provide believers with comfort and assurance that what they’ve put stock in is continually “proven” and discussed by those seen as experts, even to this day.


The Effect Effect


In 1969, the psychologist Robert Zajonc published an article about a curious study. He’d posted a silly-sounding word—either kardirga, saricik, biwonjni, nansoma, or iktitaf—on the front page of some student newspapers in Michigan every day for several weeks. Then he sent questionnaires to the papers’ readers, asking them to guess whether each word referred to “something ‘good’ ” or “something ‘bad.’ ” Their answers were consistent, if a little strange: Nonsense words that showed up in print many times were judged to be more positive than those that appeared just once or twice. The fact of their repetition, said Zajonc, gave the words an aura of warmth and trustworthiness. He called this the mere exposure effect.

Maybe you’ve heard about this study before. Maybe you know a bit about Zajonc and his work. That’s good. If you’ve already seen the phrase mere exposure effect in print, then you’ll be more likely to believe that it’s true. That’s the whole point.

Psychologists have devised other ways to make a message more persuasive. “You should first maximize legibility,” says Daniel Kahneman, who describes the Zajonc experiment in Thinking, Fast and Slow, a compendium of his thought and work. Faced with two false statements, side-by-side, he explains, readers are more likely to believe the one that’s typed out in boldface. More advice: “Do not use complex language where simpler language will do,” and “in addition to making your message simple, try to make it memorable.” These factors combine to produce a feeling of “cognitive ease” that lulls our vigilant, more rational selves into a stupor. It’s an old story, and one that’s been told many times before. It even has a name: Psychologists call it the illusion of truth.

See how it works? A simple or repeated phrase, printed in bold or italics, makes us feel good; it just seems right. For Kahneman, that's exactly what makes it so dangerous. He's been working on this problem since 1969, when he met his late collaborator, Amos Tversky, at the Hebrew University in Jerusalem. Their famous project, for which Kahneman won a Nobel Prize in 2002, was to illuminate and categorize the pitfalls of intuition, and to show that the "rational actor" of economic theory was a fiction. We're all subject to a set of reliable biases and illusions, they argued; our decisions are consistently inconsistent. For their first major paper, published in Science in 1974 and reprinted in the appendix of Thinking, Fast and Slow, Kahneman and Tversky sorted through the foibles of human judgment and laid out a menu of our most common mistakes. Here was a primer on how perceptions go wrong and a guide for their diagnosis.

The Science paper ticked off some 20 effects and biases, many reduced to simple phrases and set off in italics to make them easier to follow. Thinking, Fast and Slow updates this list with another four decades of work in the field, amounting to a Diagnostic and Statistical Manual of Mental Disorders for the irrational mind. In the course of 418 pages, Kahneman designates no fewer than three biases (confirmation, hindsight, outcome), 12 effects (halo, framing, Florida, Lady Macbeth, etc.), four fallacies (sunk-cost, narrative, planning, conjunction), six illusions (focusing, control, Moses, validity, skill, truth), two neglects (denominator, duration), and three heuristics (mood, affect, availability). A new characterization of how we misjudge the world—and a new catchphrase that we might use to describe it—appears in almost every chapter of the book. That's Kahneman's goal: He's trying to give us "a richer language" for talking about decisions, he says, and "a precise vocabulary" for their analysis.

It’s a promising thought, but to place this book in the rubric of self-help would be to mistake Kahneman—who lived for several years in Nazi-occupied France—for a benighted optimist. Again and again he reminds us that having the means to describe your own bias won’t do much to help you overcome it. If we want to enforce rational behavior in society, he argues, then we all need to cooperate. Since it’s easier to recognize someone else’s errors than our own, we should all be harassing our friends about their poor judgments and making fun of their mistakes. Kahneman thinks we’d be better off in a society of inveterate nags who spout off at the water-cooler like overzealous subscribers to Psychology Today. Each chapter of the book closes with a series of quotes—many suggested by the author’s daughter—that are supposed to help kick off these enriching conversations: You might snipe to a colleague, for example, that “All she is going by is the halo effect” or maybe you’d interrupt a meeting to cry out, “Nice example of the affect heuristic,” or “Let’s not follow the law of small numbers.”

This imaginary world of psycho-gossip and thought correction sounds like a very annoying place. And while Kahneman's book offers some clear and engaging examples of how our minds work—or don't work—it's never clear whether the propagation of his catchphrases would really improve our lives. Even if organizations and governments can benefit from a rich language of cognitive bias, what would it mean for individuals? Do new ways of talking lead us to make better judgments from one day to the next? (One might as well ask whether the adoption of Freudian terms in the 20th century helped us to manage our ids.)

Whatever its merits, Kahneman's program—to gift us with a "precise vocabulary" of illusions—plays out according to his own rules and logic. He packages his findings about decision-making into tiny marketing campaigns full of branded notions that worm their way into our heads like viral media. Repeating a slogan makes it seem safer and saner; it elevates his ideas to the level of truthiness. Remember the mere exposure effect? Kahneman wants us to develop a gut feeling about the inadequacy of gut feelings.

It’s a trick that’s become de rigueur in a certain type of science writing, and a fundament in the burgeoning field of “ideas” journalism. Let’s call it the effect effect: Reduce whatever you’re talking about to a single, italicized phrase, so much the better for tapping into a network of TED talks and Radiolab broadcasts, and then repeat, repeat, repeat. I’ve done it in my own writing, and it’s the driving force in a long run of pop-psych best-sellers (most of which are cited in Kahneman’s book). Nassim Taleb’s The Black Swan (2007) brims over with these coinages: He’s got the strengthening effect, the tournament effect, the halo effect, the silent evidence effect, the hedgehog effect, the winner-take-all effect, the butterfly effect, the spandrel effect, the Matthew effect, the nerd effect, and, of course, the black swan effect. James Surowiecki’s The Wisdom of Crowds (2004) describes the Matthew effect, the reputation effect, the cooperator bias, the confirmation bias, and the long-shot bias. Christopher Chabris and Daniel Simons’ The Invisible Gorilla (2010) gives us the blur effect, the Mozart effect, the expectancy effect, the halo effect, and the Hawthorne effect. More effects crop up in Richard Thaler and Cass Sunstein’s Nudge (2008), in Jonah Lehrer’s How We Decide (2009), in Malcolm Gladwell’s books, in pieces about Malcolm Gladwell’s books, in Slate, in Slate, and in Slate. The same catchphrases even recur from one best-seller to the next, emerging in different contexts, slightly altered or not at all, like a thinking man’s LOLcats. If these ideas are good and useful—as many of Kahneman’s seem to be—then everybody wins. But how would you know for sure?

The ubiquity of the effect effect raises a couple of questions of its own. Is there a point at which we'll have reached a state of overdiagnosis, where these self-help catchphrases have become so plentiful and diverse that we can no longer remember what they mean? Psychiatrists are just now grappling with the same concern: As their standard DSM guide swells to include marginal disorders like Internet addiction, excessive sex, and prolonged bitterness, critics worry that even the best-established forms of disease might get hopelessly diluted. Could the same happen in pop-psychology? Eventually we'll be so inundated with "effects" that the word effect will lose its effect. Maybe that's already happened.

Another question arises from the fact that so many books repeat the same basic message, and invoke such similar "effects," to explain how our intuitions can help us and hurt us. All these books are best-sellers; are the same people reading each one? (What about the best-selling New Atheist manifestos—how many readers end up buying The End of Faith, and The God Delusion, and also God Is Not Great?) Kahneman himself offers some insight into why certain types of books succeed in spite of their redundancies, or because of them. There's little in Thinking, Fast and Slow that hasn't been said before, in books and journals and lots of magazine articles. It doesn't matter, though. One of the new book's lessons is that familiarity is easy. It feels good. We have a tendency to like what we've seen before.

I might be inclined to tell you that I enjoyed this book, that its shopworn examples are well-chosen and nicely told, but there may be no point. Why should anyone believe me? I may be another victim of the effect effect, and the same goes for you.


Is Confirmation Bias Real?

Many psychologists hold that the bias is a "pervasive" (Nickerson 1998: 175; Palminteri et al. 2017: 14), "ineradicable" feature of human reasoning (Haidt 2012: 105). Such strong claims are problematic, however. For there is evidence that, for instance, disrupting the fluency in information processing (Hernandez and Preston 2013) or priming subjects for distrust (Mayo et al. 2014) reduces the bias. Moreover, some researchers have recently re-examined the relevant studies and found that confirmation bias is in fact less common and the evidence of it less robust than often assumed (Mercier 2016; Whittlestone 2017). These researchers grant, however, the weaker claim that the bias is real and often, in some domains more than in others, operative in human cognition (Mercier 2016: 100, 108; Whittlestone 2017: 199, 207). I shall only rely on this modest view here. To motivate it a bit more, consider the following two studies.

Hall et al. (2012) gave their participants (N = 160) a questionnaire, asking them about their opinion on moral principles such as ‘Even if an action might harm the innocent, it can still be morally permissible to perform it’. After the subjects had indicated their view using a scale ranging from ‘completely disagree’ to ‘completely agree’, the experimenter performed a sleight of hand, inverting the meaning of some of the statements so that the question then read, for instance, ‘If an action might harm the innocent, then it is not morally permissible to perform it’. The answer scales, however, were not altered. So if a subject had agreed with the first claim, she then agreed with the opposite one. Surprisingly, 69% of the study participants failed to detect at least one of the changes. Moreover, they subsequently tended to justify positions they thought they held despite just having chosen the opposite. Presumably, subjects accepted that they favored a particular position, didn’t know the reasons, and so were now looking for support that would justify their position. They displayed a confirmation bias.

Using a similar experimental set-up, Trouche et al. (2016) found that subjects also tend to exhibit a selective ‘laziness’ in their critical thinking: they are more likely to avoid raising objections to their own positions than to other people’s. Trouche et al. first asked their test participants to produce arguments in response to a set of simple reasoning problems. Directly afterwards, they had them assess other subjects’ arguments concerning the same problems. About half of the participants didn’t notice that, by the experimenter’s intervention, in some trials they were in fact presented with their own arguments again; the arguments appeared to these participants as if they were someone else’s. Furthermore, more than half of the subjects who believed they were assessing someone else’s arguments now rejected those that were in fact their own, and were more likely to do so for invalid than for valid ones. This suggests that subjects are less critical of their own arguments than of other people’s, indicating that confirmation bias is real and perhaps often operative when we are considering our own claims and arguments.


Evaluation of gender bias

"Another way to reduce gender bias is to take a feminist approach which attempts to restore the imbalance in both psychological theories and research. For example, feminist psychology accepts that there are biological differences between males and females: research by Eagly (1978) claims that females are less effective leaders than males. However, the purpose of Eagly's claim is to help researchers develop training programmes aimed at reducing the lack of female leaders in the real world."

how is discussing how to reduce gender bias an evaluation point? isn't evaluation meant to be strengths/weaknesses?

(Original post by JessAntonia:))
Well, the AO3 of gender bias is like evaluating an evaluation point. This is because typically you use gender bias as AO3 for something else. Soooo actually evaluating gender bias as its own thing is quite difficult (I think evaluating issues and debates is really hard).

Your best bet is an example of real-life application, so name/explain a study that displays it, like Asch. He used only male participants yet generalised to everyone, so this is a very good example of beta bias. Then you would explain that this is an example of gender bias, therefore the theory of gender bias has real-life application, showing there is some truth behind it etc. The fact that actual studies display gender bias is a strength of gender bias. I hope that makes sense!

So if I had a 4-mark question, for instance 'Describe one strength of gender bias', I would write something like:

Gender bias has plenty of real-life application. For instance, Asch used only male participants in his conformity study yet generalised his findings to both men and women. This is just one example amongst many of beta bias. This is a strength of gender bias because it increases the validity of the theory, as it is actually seen in many real studies.

(I'm basically saying: it's a real thing and this is why). I have to say I haven't looked at any of your links but this is just what I would write 😊😊😊



Implicit Bias

Thoughts and feelings are “implicit” if we are unaware of them or mistaken about their nature. We have a bias when, rather than being neutral, we have a preference for (or aversion to) a person or group of people. Thus, we use the term “implicit bias” to describe when we have attitudes towards people or associate stereotypes with them without our conscious knowledge. A fairly commonplace example of this is seen in studies that show that white people will frequently associate criminality with black people without even realizing they’re doing it.

Why it matters:

The mind sciences have found that most of our actions occur without conscious thought, which allows us to function in our extraordinarily complex world. It also means, however, that our implicit biases often predict how we'll behave more accurately than our conscious values do. Multiple studies have found that people with higher levels of implicit bias against black people are more likely to categorize non-weapons as weapons (mistaking a phone for a gun, or a comb for a knife), and in computer simulations are more likely to shoot an unarmed person. Similarly, white physicians who implicitly associated black patients with being “less cooperative” were less likely to refer black patients presenting with acute coronary symptoms for thrombolysis, a specific medical treatment.

What can be done about it:

Social scientists are in the early stages of determining how to “debias.” It is clear that media and culture makers have a role to play by ceasing to perpetuate stereotypes in news and popular culture. In the meantime, institutions and individuals can identify risk areas where our implicit biases may affect our behaviors and judgments. Instituting specific procedures of decision making and encouraging people to be mindful of the risks of implicit bias can help us avoid acting according to biases that are contrary to our conscious values and beliefs.

Implicit bias is a universal phenomenon, not limited by race, gender, or even country of origin. Take this test to see how it works for you: Implicit Bias Test

Learn more:

Implicit Bias sits at the core of our previously published reports. Most recently, Transforming Perception documents how implicit bias shapes the lives of black men and boys, and Telling Our Own Story: The Role of Narrative in Racial Healing integrates implicit bias insights with a discussion of how narrative can work to undo the harms of discrimination.