Controversial GMO-rat tumor study republished (UPDATE: Not peer reviewed)

Recall that really, really awful paper that was published about genetically modified maize, Monsanto and Round-up weed killer? You know, the unethical one with the tumor-laden rats which was eventually retracted?

Well, here is another example of YOU CAN GET ANYTHING PUBLISHED. It’s been published in another journal. It includes a new analysis.

Retracted Seralini GMO-rat study republished | Retraction Watch.

Retraction Watch learned yesterday, however, that Environmental Sciences Europe — a journal where Seralini has published before — was the journal publishing the new version. The journal, part of SpringerOpen, is too young to have an official Impact Factor (IF). Using the same calculation, however, the journal would have an IF of 0.55. That would place it about 190th out of the 210 journals in the “environmental sciences” category at Thomson Scientific. (For comparison, Food and Chemical Toxicology has an IF of just above 3, and a ranking of 27th.)

This is hardly the first time that the authors of a retracted paper have republished it. In a recent case, they did so in the same journal. But in a more typical case, they republished the work in another journal, with a lower IF.

The republished study was peer-reviewed, according to the press materials, and Seralini confirmed that it was in an email to Retraction Watch.

In their disclosure, the authors of the study did not declare that there is a book and film about the incident. The study mentions tumors but not cancer, though “cancer” was the word used in the media ploys.

Scummy “science” got even scummier.

Pathological science.

UPDATE: (27-Jun-2014) The editor of the new journal admitted that it was not peer-reviewed by his reviewers since that was already done the first time. Lame. Seralini is still saying it was. This study is just horrendous. As noted at Neurologica, this is a small, poor study that is politically motivated – most insidious stuff.


  12 comments for “Controversial GMO-rat tumor study republished (UPDATE: Not peer reviewed)”

  1. James G
    June 24, 2014 at 4:36 PM

    They published the original paper in conjunction with a book and film deal. I don’t know how they can get away with not describing that as a conflict of interest. Shameful behaviour, and another black eye on the peer review process.

    It sounds like the journal is defending itself by claiming it wants to encourage debate, much like the demons and psychosis publisher did. The debate has already been held. The research is bad science.

    What is it going to take to get these publishers to take responsibility for quality control? There need to be consequences for accepting material like this. There should also be a means to censure people like Seralini and Wakefield who deceitfully use their research to line their pockets. Science has to start taking care of unethical behaviour, or it will continue to lose the confidence of the public. It will devolve into a contest of opinion versus opinion, instead of truth versus myth.

  2. Lucky Lester
    June 24, 2014 at 5:25 PM

    Here are a couple of interesting links in regard to this matter:

    I would be interested in hearing some of your responses.

  3. June 24, 2014 at 6:37 PM

    Replicate. That’s my response. There are so many problems with this study it’s not even funny.

  4. James G
    June 25, 2014 at 1:12 AM

    Here are a few thoughts on the second link –

    “The counter argument is that these were two entirely different sorts of studies. Yes so? The main difference between the studies was that Séralini’s study was more extensive. Monsanto’s study lasted only 90 days while Séralini’s lasted 2 years.”

    The probability of a tumour developing in an organism is a function of time. As the organism ages, genetic changes accumulate until one cell is dysregulated enough that it can form a tumour. That’s why cancer is unusual among children, but common among seniors.

    “a longer study tends to be more accurate since toxicity indicators like cancer are usually a slow process”

    Not necessarily. I think you could argue it can introduce more noise into the results, especially when it comes to tumourigenesis, which is why long term studies benefit from larger sample sizes.

    “So what if that strain is more likely to develop tumors? What Séralini found was that rats fed Monsanto’s Roundup-tolerant GM maize NK603 had very high levels of cancer and died earlier than the control group. How in any way is it relevant that the rats used are more susceptible to tumours?”

    Again, using rats prone to tumour development is an issue in long term studies when you are tracking the development of tumours. One of the objections raised was that they misinterpreted some of the tumours, including some that could only have arisen in a rat embryo, and thus before the rats were entered into the study. These pre-existing tumours were wrongly attributed to GMOs. When you are only dealing with ten rats, two tumours are significant.

    “Monsanto in comparison did not state that their control diet was non-GM in their 90-day feeding trial data. So Séralini’s study used proper control diets as stipulated by EU GMO legislation whereas Monsanto used irrelevant control diets”

    The paper quoted in Seralini’s study clearly states that the GMO portion of the diet was compared against six other non-GMO supplements.

    “Séralini observed high rates of cancer in the GMO maize fed rats and reported that observation as any good scientist would do. Was he expected to ignore observations that did not conform to what he expected to find?”

    He observed high rates of cancer in a rat strain known to suffer from high rates of cancer. He did not offer any statistical evidence that the rates seen in any group were significantly different from any other. If you analyzed the incidence of tumours in all groups and found the differences weren’t significant, you should probably make that clear. Instead he doesn’t provide any statistical analysis, but chooses to include pictures of rats with massive tumours. That’s sensationalism, not responsible science.

    I should add that one of the biggest criticisms in the many comments I’ve read is that some of these rats should have been euthanized well before their tumours progressed to the stage pictured. I wonder though, if someone’s intent was to push an anti-GMO agenda, wouldn’t it be in their best interest to make sure the rats were as sick and the tumours as big as possible?

    “The sample size argument is also entirely irrelevant. Séralini followed the toxicity part of OECD protocol no. 453 as he should have. This protocol states that you must use a minimum of 10 rats of each sex per test group.”

    But the sample size is relevant. Bigger sample sizes provide more certain results. At least one analysis was able to show that the mortality results were probably not meaningful. Simply because some protocol insists you have at LEAST a certain number of rats doesn’t make your data any more or less useful.

    “It is also true that Séralini’s results are inconclusive because of the rat strain and the small sample size. But so are Monsanto’s results. Studies are retracted over serious errors or fraud not inconclusiveness.”

    I thought the sample size was irrelevant. The study was retracted because it presented some of the results as conclusive. Figure 5 and table 3 both describe a 99% confidence interval. There have been many questions regarding irregularities or errors in the histology and pathology findings as well as the design of the experiment.

    “What he is exposing is an extremely disturbing bias.”

    He exposes an extremely common and justified bias. When you present data that goes against everything that has been learned so far in a field as widely studied as GMO foods, you can expect the scientific community will crawl into all the dark corners of your paper with a microscope to see if it’s legit.

    Don’t expect an easy ride when you overturn the apple cart. A while back some physicists published data they felt confirmed the signature of gravity waves in the CMB. You don’t hear them whining about the massive scrutiny their paper got. Extraordinary claims require extraordinary proof. Or any proof.

    On a happier note, I KNEW that I would find the name of my lab instructor from VIU, Robert Wager in one of the letters complaining about bad science. I guess I should reveal that as a possible source for bias. I can assure you from my experience that he is not a pawn of the food-industrial complex. He always struck me as very passionate about science and agronomy. 😀

    He was one of the authors of this letter –

    And here is the paper in question, and the letters the journal published in response. All of it is freely available, and probably a lot clearer than what I’ve written here –
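The sample-size point raised above can be illustrated numerically. The sketch below runs a two-sided Fisher’s exact test on hypothetical tumour counts (2 of 10 control rats vs 5 of 10 treated rats — made-up numbers, not Seralini’s actual data), showing that with groups of ten, even a large apparent difference in incidence falls well short of statistical significance.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    a = tumours in group 1, b = tumour-free in group 1,
    c = tumours in group 2, d = tumour-free in group 2.
    """
    row1, row2 = a + b, c + d
    col1 = a + c                      # total tumours across both groups
    total = row1 + row2
    denom = comb(total, col1)
    # Hypergeometric probability of every table with the same margins.
    probs = [comb(row1, k) * comb(row2, col1 - k) / denom
             for k in range(max(0, col1 - row2), min(col1, row1) + 1)]
    observed = comb(row1, a) * comb(row2, c) / denom
    # Two-sided p-value: sum over all tables at least as extreme as observed.
    return sum(p for p in probs if p <= observed + 1e-12)

# Hypothetical counts: 2/10 control rats vs 5/10 treated rats with tumours.
p = fisher_exact_two_sided(2, 8, 5, 5)
print(f"p = {p:.3f}")  # roughly 0.35, far above the usual 0.05 threshold
```

With ten rats per group, a jump from 20% to 50% tumour incidence gives p ≈ 0.35 — entirely consistent with chance, which is why the protocol minimum of ten rats is adequate for toxicity screening but not for drawing conclusions about tumour rates.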

  5. Andrew W
    June 25, 2014 at 2:23 AM

    All I can say is, it’s been a bad year for rats…

  6. 'Lucky' Lester
    June 25, 2014 at 2:45 AM

    Good point; replicate. A cornerstone of good science.

    Which is why I find it strange that according to OECD protocols, industry only has to do one set of tests to gain approval in any OECD member country. I find it worrying that in this case a single 90 day test done by the company seeking approval is deemed sufficient evidence to show safety, especially in the case of a radical new technology with huge implications on how and what food is produced, who controls it and with the possibility of unforeseen environmental effects.
    (it reminds me of a recent ‘doubtful’ thread title: ‘Lets ask the dealers if their stuff is good’)

    My research has suggested that the biggest criticisms of this study have been that it did not follow OECD protocols for a carcinogenicity study.
    Perhaps that would be because they were testing for chronic toxicity and as such followed the protocols for that type of study.
    The increased incidence of cancer in the study group was an unexpected result. If you compare Seralini’s study with Monsanto’s 90-day study (which proved safety), it was more extensive, ran for longer, had a greater number of test groups, and a greater range of parameters were measured.

    I have also come across reference in several places to Monsanto only analysing half of the sample tested. I also understand that Monsanto’s data is unavailable. Very odd.

    The other big criticism is the type of rats used by Seralini, specifically that they were a strain more prone to cancer. Given that the control group was exactly the same strain, I don’t understand how this casts doubts on his findings.
    Also data from the Ramazzini Institute in Italy shows that far from being unusually “prone” to tumours, the SD rat is an excellent model for human carcinogenicity and is about as prone to tumours, both “spontaneous” and environmentally induced, as we are.

    ‘The key point about Seralini’s tumour findings was that the controls got some tumours, but that the treated groups got significantly more tumours, which began sooner and were more aggressive, than the control groups. This argument is explained here:

    Would anyone care to respond to the points above which seem to me to be fair rebuttal of the two main things supposedly wrong with Seralini’s study?

    Sharon, you state that ‘There are so many problems with this study it’s not even funny.’ Would you kindly indicate what you see to be some of the other main ones?

  7. 'Lucky' Lester
    June 25, 2014 at 3:26 AM

    Good points. Especially the one about sample size. Ten rats seems an extremely small sample.
    Given that this is such an important, controversial and emotional issue, can we afford not to replicate the tests (Monsanto’s as well as Seralini’s) with larger samples by independent scientists?

  8. June 25, 2014 at 7:57 AM

    I have noted the other problems in the listed posts. Please stop linking to anti-GMO propaganda sites. See our comment policy.

  9. Chris Howard
    June 25, 2014 at 8:18 AM

    But on the upside cancer’s doing really well…

    Too soon?

  10. Jon O
    June 25, 2014 at 12:08 PM

    No doubt the initial paper was poorly reviewed and shouldn’t have been published in the state it was in. However, I disagreed with its politically motivated retraction. Retraction should be reserved for plagiarism, data falsification, and mislabeling/interpretation mistakes. The editorial committee retracting it showed a remarkable spinelessness. Instead, the journal should have invited rebuttal studies.

    Bad paper + good rebuttal is much more valuable to the scientific record than just rebuttals showing negative treatment responses.

  11. Lucky Lester
    June 25, 2014 at 5:13 PM

    My apologies. I provided the links to indicate my sources, especially when I copied text.

    As a chronic skeptic, I tend to question consensus view as well as alternate, and these days (as always) it can be difficult to distinguish between an opposing view and propaganda.

    Please delete the links and my comments if you feel that they may stimulate unnecessary debate on this topic.

Comments are closed.