• The technical term for these tiny shortcuts is questionable research practices. They are difficult to study because scientists publicly shame peers who use such practices while continuing to use them in private.

    It all depends on the integrity of the researcher, which in turn depends on their academic upbringing (the example set by their advisor).

    Relevant links to two podcast episodes about p-hacking:

    - https://nulliusinverba.podbean.com/e/p-hacking-i/

    - https://nulliusinverba.podbean.com/e/p-hacking-ii/
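    The effect those episodes describe can be sketched in a few lines. Below is a minimal, stdlib-only simulation (function names and parameters are my own, purely illustrative): both groups are pure noise, but an analyst who measures ten outcomes and reports only the smallest p-value gets a false-positive rate far above the nominal 5%.

```python
import math
import random

def p_value(a, b):
    # Two-sided z-test for two equal-size samples drawn from N(mu, 1).
    # Assuming known variance keeps this stdlib-only (a real analysis
    # would use a t-test).
    n = len(a)
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2 / n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def best_p(rng, n=30, n_outcomes=10):
    # One "study": no real effect exists, but the analyst measures
    # n_outcomes different outcomes and keeps only the best p-value.
    return min(
        p_value([rng.gauss(0, 1) for _ in range(n)],
                [rng.gauss(0, 1) for _ in range(n)])
        for _ in range(n_outcomes)
    )

rng = random.Random(42)
trials = 2000
honest = sum(best_p(rng, n_outcomes=1) < 0.05 for _ in range(trials)) / trials
hacked = sum(best_p(rng, n_outcomes=10) < 0.05 for _ in range(trials)) / trials
print(f"false positives, 1 outcome:   {honest:.1%}")  # near the nominal 5%
print(f"false positives, 10 outcomes: {hacked:.1%}")  # roughly 1 - 0.95**10, ~40%
```

    Reporting only the winning comparison is the whole trick; preregistering the outcome, or correcting for all ten tests actually run, removes the inflation.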

  • I think part of the problem is that what is often referred to as "science" these days is very different from the hard sciences of yesteryear.

    There are a lot of "soft" sciences that get increasingly softer every year. Social sciences, gender and women's studies, political science, some of the fast and loose use of "economics" these days.

    There are a lot of "studies" these days that are little more than slanted questionnaires or selective correlational studies with wild, unsupported theories about the results.

    • > There are a lot of "soft" sciences that get increasingly softer every year. Social sciences, gender and women's studies, political science, some of the fast and loose use of "economics" these days.

      I don’t think anyone is claiming these are sciences, except perhaps economics. I think you’re fighting a straw man.

      • People get doctorates in these fields and publish studies in journals that get picked up by think tanks and media outlets. It's "science" for all intents and purposes; the studies are used as a source of authority based on data, analysis, and formal papers.
  • If a scientist does more work securing grants than doing science (my understanding is that this is very common), constantly justifying their own existence, then I wouldn't be surprised that results get skewed toward that end.

    If every software engineer and developer had to do more work justifying their own existence than actually coding and developing, I suspect overall software quality would be worse than it is today.

    • Heh, in support it would be kind of like measuring productivity by the number of tickets closed.

      The first thing you'd game is splitting any big work ticket into smaller and smaller tickets. In some sense you should already split up any 'different' work on the same call, but typically you do this within reason.

      After that you get into the bullshit-number-generating phase: tickets for work not done (fraud imposed by unrealistic quotas), tickets with zero relevance or meaning (ticket stuffing), and tickets covering conversations or other things that shouldn't be tickets at all.

      You're not measuring what you actually want; you're measuring what you can measure, so people start producing what you can measure rather than what you really want. That's Goodhart's Law.

  • At a certain point, the distinction between "fraud" and "not fraud" is a red herring. The downstream effects start to become similar enough.
  • The article misattributes the cause of the public's loss of trust in science.

    The public has lost confidence in science because commercial and political entities have worked systematically to undermine scientific authority that threatens their business models and narratives.

    The fossil fuel, tobacco, and refined sugar industries manufactured decades of scientific "controversy" around climate change, tobacco's health impact, and the role of sugars and fats in obesity and heart disease. Religious fundamentalists fund pseudo-scientific books and articles attempting to muddy the waters concerning geological and biological evidence about any time period before the invention of agriculture. Grifters fabricate arguments and data to delegitimize Western medicine so they can sell placebos at high markup. Politicians attribute all unflattering research to partisan skullduggery.

    It is true that hard science, social science, and medicine all have published flawed work; sometimes even in bad faith. These occasional failures -- which should be corrected and never tolerated -- are then used by the same people to indict the entire enterprise.

    It's true that there's research misconduct, and it must be weeded out.

    What's also true is that funding for research is declining in the US, especially relative to our need for it.

    Note how many "tweaks" center on things like sample size or statistical significance. Human-oriented research is intractably complex, and we get the scientific outcomes we do because studies are perennially understaffed and under-resourced relative to the questions they seek to answer. Trying to answer fundamental questions about health and wellness with 30 people tracked over 6 weeks is hopelessly underpowered, but it's the best we can do with the resources we have.

    In a world where funding was plentiful, and career paths not cut-throat and perilous, there'd be far fewer examples of these kinds of "tweaks." People respond to incentives, and if the options are "add a few more participants, because after all the initial sample size was somewhat arbitrary" or "fail to publish, fail to graduate, fail to get a permanent position after a decade of post-secondary education because applicants must be perfect," the only surprise is that they aren't more common.
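    The "add a few more participants" tweak is optional stopping, and its cost is easy to sketch. In this minimal, stdlib-only simulation (names and batch sizes are illustrative assumptions, not from the article), there is no real effect, yet re-testing after every batch of participants and stopping at the first p < 0.05 inflates the false-positive rate well past the nominal 5%:

```python
import math
import random

def p_value(a, b):
    # Two-sided z-test for two equal-size samples drawn from N(mu, 1);
    # assuming known variance keeps this stdlib-only.
    n = len(a)
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2 / n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def study(rng, batch=10, max_batches=10, peek=False):
    # Both groups are pure noise. With peek=True the researcher checks
    # significance after every batch and stops as soon as p < 0.05.
    a, b = [], []
    for _ in range(max_batches):
        a += [rng.gauss(0, 1) for _ in range(batch)]
        b += [rng.gauss(0, 1) for _ in range(batch)]
        if peek and p_value(a, b) < 0.05:
            return True  # declared "significant" and stopped early
    return p_value(a, b) < 0.05

rng = random.Random(0)
trials = 2000
fixed  = sum(study(rng, peek=False) for _ in range(trials)) / trials
peeked = sum(study(rng, peek=True)  for _ in range(trials)) / trials
print(f"false positives, fixed n:       {fixed:.1%}")   # near the nominal 5%
print(f"false positives, peek-and-stop: {peeked:.1%}")  # several times higher
```

    Sequential designs that allow interim looks do exist (group-sequential and alpha-spending methods), but they budget for the extra looks up front; the informal version above does not, which is exactly the "tweak."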

  • > It would go on to become Halsman’s most iconic image.

    Indeed!

    https://arthur.io/img/art/jpg/00017344b99caa17a/philippe-hal...