• What a mess.

    > One author of a case report was surprised to learn of the correction — because the case described in her article is true.

    So they managed to mess up even the correction of their giant mess.

    > correcting the correction "would be difficult."

    I bet. That's why they should have got it right in the first place. I would be absolutely ballistic if they were libelling my work like that.

    • Yeah, they seem to have been quite sloppy with these vignettes.

      Though note that in the situation of the mislabeled real case, the formal solution could be a retraction of the entire highlight article, since it is against the (poorly implemented) policy to have a real case study.

      Don't know how patient consent for being used in a case study works. Did this author get a perpetual license, did they just copy something from another article they wrote, or from an article someone else wrote?

    • It looks like they labelled all of them fiction based on a single instance of one of the authors fabricating their case, a gross overcorrection. I wonder if they flinched at the prospect of actually assessing the validity of all of them and decided it was safer to just disclaim them.
      • > It looks like they labelled all of them fiction based on a single instance of one of the authors fabricating their case

        Does it? That's directly at odds with what the article and editor say.

        • > The corrections come following a January article in New Yorker magazine that mentioned one of the reports — “Baby boy blue,” ... was made up.

          > “Based on the New Yorker article, we made the decision to add a correction notice to all 138 publications..."

          Emphasis mine.

          • > While the instructions for authors for Paediatrics & Child Health has at times indicated the case reports are fictional, that disclosure has never appeared on the journal articles themselves.

            Sounds like they were asking authors for fiction, so probably plenty of them are.

    • > I would be absolutely ballistic if they would be libelling my work like that.

      Genuine question, could they sue for this? It seems like a pretty good case.

  • Speaking as the spouse of a medical doctor -- case reports are sometimes a good way to increase the bullet point count on your CV if you are a medical resident. A lot of residents do that just for the sake of beefing up their CVs (to apply for a fellowship, for example).
    • I don't see anything wrong with that by itself; with the amount of patients doctors see there should be one once in a while that is worth reporting. Or are such cases so rare that the doctor is incentivized to lie?
      • I think you may have missed the original commenter's point. Residents (and medical students) are highly incentivized to publish unrealistic numbers of papers and case reports. One case report doesn't cut it—you need literally dozens of publications to match into some of the most competitive residency and fellowship programs. The NRMP (match organizer) publishes a document every 2 years that summarizes all of these stats. The 2024 version is in the link below, and page 12 supports what I'm saying.

        https://www.nrmp.org/wp-content/uploads/2024/08/Charting_Out...

        • This is another example of Goodhart's law in action, right?

          Weirdly, Pediatrics (chart 7) skews the other way (fewer publications tended to get into residency programs)? Are those doctors/administrators/programs somehow seeing through the nonsense?

          • I wonder if it's because pediatrics is not competitive unless applying to a top program.
        • 27.7 works to match derm. Holy crap that’s a lot. No way. We would be gods of skincare by now.
    • In vet med, case studies are still pretty important, but that's because vet med is in its infancy compared to human medicine. At least one case study, usually two, are required to be eligible to take boards. Future board renewals, I think for most boards, are "published one original piece of research or two case studies" among a slew of other requirements.
  • > The articles usually start with a case description followed by “learning points” that include statistics, clinical observations and data from CPSP.

    I can see why fictional cases could be used here as a teaching aid - based on real cases/illnesses but simplified to make the learning points succinct - but surely if the cases are being cited elsewhere someone should have raised the issue earlier?

    • Since it was for teaching I expect the case studies were always showing typical features of real cases, so there's nothing in the case vignette itself to give it away unless the author picks a funny name or something like that.

      Rather it would be the entire form of these short highlight articles that would make you keep searching for a proper citation, unless you're lazy or pressed for time.

    • Wouldn't citing actual cases be a HIPAA violation? I can see why they would invent example cases, based on real ones, especially if they are fairly pedestrian cases.

      I mean. Except if your pedestrian example does not reflect reality, then that is bad.

      • It's a privacy violation to reveal information that identifies the patient. It is not a violation (and is extremely common) to recount details without noting names, places, or even dates. Unless you already have access to a database of records you won't be able to track it down.

        It's even common during talks to display diagnostic images that have had any identifying marks redacted.

      • HIPAA is American, not Canadian.
  • I think this is mainly a case of the common "didn't notice when crucial literature for their own published content was retracted, get caught with pants down when the replication police come knocking".

    Obviously the poor labelling is bad, but 9 bad citations per year isn't the end of science, and better labelling wouldn't discourage all the lazy authors who chose to cite these highlight articles; it'll just shift who is to blame.

    The real problem is hosting a review article about research that was retracted, and it sounds like they aren't moving very quickly on taking that piece down.

  • Original HN discussion about the case:

    https://news.ycombinator.com/item?id=46789205

    • Thank you, this really adds the missing context to this update about fictional case studies. The original read was compelling and also alarming.
  • This is fine, though somewhat belated. But it does nothing to deal with the public's growing distrust of science in general, and medical science in particular.
    • The "growing distrust" is due to a concerted disinformation campaign which is independent of the facts.

      There was indeed much negative information that the public was not aware of, and they should perhaps have held more skepticism than they did. But the gleeful acceptance of outright anti-science lies implies that they were never really in a position to make a sound judgment one way or the other.

      In those circumstances I'll settle for people reaching the correct action: that practically all accepted medicine is correct and they should follow their doctor's advice. If they choose to over-inflate the importance of things that do indeed go wrong, then they are the ones failing to reach valid conclusions.

      • No, it isn't. Anthony Fauci and Rochelle Walensky were both on record, on television, claiming that anyone who takes the covid vaccine will not contract the virus (sterilizing immunity). The medical community and public health in particular disgraced themselves by going all in on demonizing anyone who raised questions about the covid jabs, mocking Ivermectin as "horse paste", claiming cloth masks were very effective against respiratory viruses (they are not, and this has been known for decades), and even that the concept of acquired immunity from recovering from an infection doesn't exist. These are all trivially verifiable things that happened during covid madness, and instead of walking back some of their false claims they simply doubled down on blaming the anti-vaxxers (even after jab uptake exceeded 80%).
        • I thought it was "contract and spread". You can't even get your own disinfo straight.
        • Like I said: every word out of your mouth there is a lie. Yes, I know the links you're about to hand me, to right-wing disinformation sites and actual news articles that don't say what you're pretending they say.

          These are straight out falsehoods, collected for you deliberately, which you are repeating because you didn't even pretend to examine them critically. There is no way to discuss the actual mistakes made during the pandemic when it takes me ten times as long to refute the lies you're spreading.

  • In the era of GitHub etc, if you're not giving out every single data point of your research, it should be assumed it's fake.
    • The article is about case reports, not about empirical studies. Putting a fake case report on GitHub wouldn't make it any less fake.
      • > Putting a fake case report on GitHub wouldn't make it any less fake.

        Much easier to review for whoever wants to review it.

        • Obviously just sending it via email to the reviewers works just fine in practice anyway; the problem is really that they published a summary piece about research that was later retracted, but didn't take down their own article.
        • Do you know what a case report is?
        • Would it be easier, though? Medical records (in the US) are covered by HIPAA and, to my knowledge, there is no anonymized canonical record, similar to what we have for legal decisions. Without that, how difficult would it be to just "make shit up"?
    • And then there would be large amounts of fake data for the next generation of AIs to learn from.

      What is stopping anyone from faking the data they use in their research papers?

      Sure, it might be verifiable, but not if the data was fabricated to give the desired results, i.e. made to be exactly what the paper requires.

    • Out of context that makes sense... but in the context of a case report, how do you implement that? The patients have privacy rights and the authors/doctors have a responsibility to protect them. That doesn't justify this, but it does force a conversation about what 'every single data point' means. Does it mean the patient's real name and social security number? Their complete medical chart?

      Case reports are descriptive not determinative and should be treated as such by other scholars. They are 'I saw this' not 'this is generalizably true'. They can (and often are) replicated or countered but they are not per se research as you are thinking about it. Whether it is fictitious or not, other scholars should be cautious in citing them as proof/evidence in papers that fit into the 'research' mold.

      • From a legal perspective, journal article authors can implement this by following the official HHS guidance for de-identification. This applies to any use of protected health information (PHI), not just case reports.

        https://www.hhs.gov/hipaa/for-professionals/special-topics/d...

        The IRB for a particular organization can impose additional restrictions.

  • They had access to ChatGPT for the last 25 years!
  • Too late; it's already in the bloodstream. LLMs will be recommending things to pediatric doctors and families from fabricated archives for years, probably.
    • That's a serious issue: How could retractions work with LLMs? How could they be made to work?

      Accuracy rots over time, and at varying rates. It's not just scientific research.

    • It’s all a hallucination.
  • I don't mind the fact that the case reports were fictional -- actual cases can be problematic in terms of privacy as it may be easy to ascertain the patient's identity from the details -- but not putting a notice that it was fictional (or altered from a real case for privacy), for teaching purposes, is pretty bad.
  • “Pics or it didn’t happen,” goes a long way in my book.
    • You may want to update that, given recent advances in generative AI.

      No idea what you should update to, mind you, but the old era of photographic evidence is on its last jpgs.

  • https://onlinelibrary.wiley.com/doi/10.1111/jpc.14206

    Maybe we should revisit the routine practice of infant male genital mutilation?

  • The detail that makes this more than a labeling error: the fictional nature appeared in the journal's author guidelines, not in the published articles. Researchers who cited these 61 papers had no way to distinguish them from genuine case reports. 218 citations later, the fiction is embedded in secondary analyses and literature reviews written by people who had no idea.

    The "Baby Boy Blue" (2010) case is the clearest example of the harm. An infant allegedly exposed to opioids through breast milk. That case influenced clinical guidance on codeine safety in nursing for years. The CARE guidelines (Consensus-based Clinical Case Reporting Guidelines) exist specifically to create transparency in case reporting. They're voluntary, which is how a journal can run a 25-year undisclosed fiction program and technically say the authors knew.

    • Doesn't sound like these works were "full" articles, but rather something more like short review articles.
  • I think research should be assumed fiction until it’s peer reviewed.
      There is no good evidence that peer review improves quality, and there is perhaps some to the contrary (many predatory journals are peer reviewed). The arXiv (unreviewed) is among the most reliable sources available.
      • Yeah, it's almost like science is better when the scientific method is applied to everything, instead of delegating validation to some third party based on credentials or authority or social status.
      • What do you suggest instead? Certainly not giving up I hope.
    • I think it's a bit different considering the goal was a teaching tool for well recognised conditions:

      >all or almost all were cases of very well recognized conditions [...] where a single case report would not generate any interest or ever be cited.

    • That is an ironic proviso given that the article clearly states

      "The peer-reviewed articles don’t state anywhere the cases described are fictional."

      Peer review by peers who are trained by non-replicable science is not helpful...

    • Independently replicated. Reviewed says pretty much nothing.
      • Peer review is a sniff test. It cannot guarantee that the results are correct and the conclusions are right. It is just designed to limit some kinds of errors. Replication is important.
      • Case studies can't be replicated. They aren't experiments.
        • You can find multiple cases that are comparable. One case study is an anecdote; multiple studies of the same kind of case...
      • Tough to replicate an isolated case study?