
NYT article -- meta-analysis -- effectiveness of ADs


Rosetta


https://www.nytimes.com/2018/03/12/upshot/do-antidepressants-work.html?rref=collection%2Fsectioncollection%2Fscience&action=click&contentCollection=science&region=stream&module=stream_unit&version=latest&contentPlacement=6&pgtype=sectionfront

 

Q: Do antidepressants work?

A: Eh... maybe short term; then again, maybe not. No one is sure, because drug companies fund all the research, hide the negative results, and publish only the "positive" ones. If we want answers, the patients will have to find them on their own. Oh, maybe "ask your doctor." Riiiight!

https://www.survivingantidepressants.org/topic/16629-rosetta-ct-may-2011-too-fast-taper-feb-2017/?page=25

2001-2011 Celexa 10 mg raised to 40 mg then 60 mg over this time period

May 2011 - OB doctor's cold switch from Celexa 60 mg to Zoloft (sertraline) 10 mg (baby born)

2012-2016 - Doctors raised dose of Zoloft up to 150 mg

2016 - Xanax prescribed - as needed - 0.5 mg about every 3 days (bad reaction)

2016 - Stopped Xanax

Late 2016 - Began (too fast) taper of Zoloft

Early 2017 - Trazodone prescribed for bedtime (dosage unknown)

Feb 2017 - Completed taper/stopped Trazodone

Drug free since Feb 2017

2017 - Unisom (OTC) very rarely for sleep


2 weeks later...

"recently, the most comprehensive antidepressants study to date was published, and it appears to be a thorough effort to overcome the hurdles of the past"...by focusing on the very same studies said to be misrepresentative, biased, and inadequate!  this news article is a little confusing to me.  it starts by saying criticism has been lodged due to the poor construction of clinical trials and how their data are shared, and then says this meta-analysis is an improvement over old ones despite relying on the same flawed clinical trials and data-withholding schemes--the very same trials as previous meta-analyses, plus newer trials with similar flaws but, according to the authors of the paper itself, reporting noticeably less favorable outcomes than the premarketing RCTs.  so, the studies where most trials showed a failure to best placebo + studies with even less impressive average results = antidepressants are actually more effective than previous meta-analyses had suggested!  mathematics!

 

for a bit more background, many of the unpublished trials they added were not antidepressant vs placebo and thus do not speak to efficacy as we would normally discuss it--particularly since no reliable and enduring difference has been found between individual antidepressants, as opposed to contexts of use.  i couldn't find exact figures, and i paused my counting after a third of the first 30 of the 101 trials marked as unpublished turned out to have no placebo arm.  only 432 out of the 522 total studies were included in the response-analysis portion, as well.  so while more data was added all around, the added reach of this meta-analysis is not really a revamping of previous methodologies.  a blog article pointed out that this new meta-analysis actually found a marginally smaller effect size than some of the previous, also prominent meta-analyses (including the kirsch one with unpublished trials): http://blogs.discovermagazine.com/neuroskeptic/2018/02/24/about-antidepressant-study/    given the unscientific and muddled nature of depression scoring in clinical trials, what this means for actual patient experiences is not really addressed (especially since these trials generally tabulate antidepressant withdrawal symptoms as nocebo effects or placebo inefficacy anyhow).
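
to make the placebo-arm point concrete, here's a minimal sketch in python (the trial records are invented for illustration--none of this is from the actual dataset): only trials with a placebo arm speak to efficacy-vs-placebo, so head-to-head comparisons pad the headline trial count without addressing it.

    # hypothetical trial records; "arms" lists what each trial compared
    trials = [
        {"id": "t1", "arms": ["drug_a", "placebo"]},   # informs efficacy vs placebo
        {"id": "t2", "arms": ["drug_a", "drug_b"]},    # head-to-head only
        {"id": "t3", "arms": ["drug_b", "placebo"]},   # informs efficacy vs placebo
        {"id": "t4", "arms": ["drug_c", "drug_a"]},    # head-to-head only
    ]

    placebo_controlled = [t for t in trials if "placebo" in t["arms"]]
    share = len(placebo_controlled) / len(trials)
    print(f"{len(placebo_controlled)}/{len(trials)} trials have a placebo arm ({share:.0%})")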

 

the new meta-analysis defined acceptability by dropout rates, but it didn't include studies with dropout rates so high that the studies were stopped, or that were too incompletely reported to be included in the analysis.  in this model of analysis, harms, deaths, and clinical outcomes don't matter as much as whether someone sticks with a few weeks of drug use (potentially with forms of reimbursement).  the authors do acknowledge that considering the risk of adverse effects, including withdrawal states, is a significant part of responsible prescribing, and that their meta-analysis essentially does not intend to speak to whether antidepressants are actually effective or helpful to use in the real world.  omitting many of the trials with the highest rates of patient dropout is hardly the best way to measure drug 'acceptability', but i can understand the need to keep things tidy in a meta-analysis by excluding input that can't be standardized.  many news outlets have seemingly failed to do justice to the actual caveats framing the new meta-analysis, as dr ioannidis is quoted as explaining in the article you linked.
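
a toy illustration of that acceptability problem (all numbers invented): the pooled retention rate looks better when the worst trials are left out because they were stopped early or reported too little to use.

    # (dropouts, enrolled) per trial
    included = [(10, 100), (15, 100), (20, 100)]   # trials in the analysis
    excluded = [(60, 100), (55, 100)]              # hypothetical halted/unusable trials

    def pooled_dropout(trials):
        dropouts = sum(d for d, n in trials)
        enrolled = sum(n for d, n in trials)
        return dropouts / enrolled

    print(f"retention, included trials only:  {1 - pooled_dropout(included):.0%}")
    print(f"retention, counting the excluded: {1 - pooled_dropout(included + excluded):.0%}")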

 

more than 1/3 of the trials included in the meta-analysis involved the use of psychiatric drugs added to the antidepressants being studied--"typically benzodiazepines or other sedative hypnotics", according to the paper.  these are, practically speaking, primarily used to decrease dropouts due to serious or unbearable antidepressant effects (boosting drug acceptability) and to address problems, such as insomnia, that the antidepressants were either causing or not helping with (boosting drug efficacy).  indeed, the meta-analysis says that "larger all-cause dropout rates were associated with a lower response to treatment" in studies comparing antidepressants to placebo where patients had an equal chance of being put on either option.  the paper frames this as a potential underestimation of antidepressant efficacy under certain interpretive models, but it may also mean that efforts to keep patients active in a trial can yield higher reported efficacies than studies of the same drugs in the same kinds of patients where less is done to keep patients taking antidepressants regardless of the effects experienced.
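
a quick sketch of what that reported association looks like (the per-trial numbers are made up; statistics.correlation requires python 3.10+):

    import statistics

    dropout_rate  = [0.10, 0.15, 0.20, 0.30, 0.40]   # per-trial all-cause dropout
    response_rate = [0.60, 0.55, 0.50, 0.42, 0.35]   # per-trial response to the drug

    r = statistics.correlation(dropout_rate, response_rate)
    print(f"pearson r = {r:.2f}")   # strongly negative with these invented numbers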

 

the news article mentions how "the authors went an extra step and asked for unpublished data on the studies they found, getting it for more than half of the included trials."  does that mean almost half of the included studies (48%) were incomplete in ways that drug companies and other vested-interest parties preferred they stay incomplete, or does it mean the authors did not seek unpublished data for the other 48%?  perhaps some combination?  it was not clear from the text of the meta-analysis.  the supplementary materials do say that they requested both published and unpublished data "for all studies", and their flowchart even shows what happened to studies that were not included in the analysis.  for instance, 35 trials they wanted to include had no response rates in either their published or unpublished forms.  so...i guess they are saying that for 48% of the total trials (522), someone was holding out on them, or some other obstacle kept them from obtaining more of the trial data.  the tables in the supplementary materials do look kind of shabby in some of the categories it would be important to have information on, like the use of 'rescue medications', treatment setting, and whether there was a placebo run-in.
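
for what it's worth, the arithmetic behind my 48% guess (522 is the paper's trial count; the "more than half" split is the article's wording, so these are rough figures):

    total = 522
    with_unpublished = round(total * 0.52)   # "more than half", taken loosely
    without = total - with_unpublished
    print(f"{without}/{total} trials (~{without / total:.0%}) apparently lacked retrievable unpublished data")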

 

interestingly, though not of direct relevance, less than 10% of the included trials had published and unpublished response rates that matched--the rest were inconsistent, incomplete, or unavailable.  being studious, the authors of the meta-analysis opted to use the unpublished data over the published data when conflicts between them arose.  however, the authors also mentioned preferring more data over more accurate data: if a study was published in multiple articles and any unpublished data retrieved was missing corrections, they chose the article with more information rather than the one with a more coherent presentation (not that they could necessarily tell the difference without the full dataset, and more than 40% of the included trials were stated to have incomplete unpublished information pertaining to response rates).  we have seen what that does to reports on things like antidepressant-induced suicidality.  marketing the illusion of transparency and completeness is more dangerous than the mere lack of transparency and completeness unto itself.  but, as i've said, these systemic problems are more about our failure to hold higher standards for the running and reporting of trials.  while a lot of mainstream publications shy away from criticism, or even balanced reviews, you can't much polish a turd, either.
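
as i read it, their data-preference rules amount to something like this (my paraphrase in python, not the authors' actual pipeline):

    def pick_response_data(published, unpublished):
        """each argument: dict of outcome -> value, or None if unavailable."""
        if unpublished is not None:   # unpublished data wins any conflict
            return unpublished
        return published

    def pick_article(articles):
        """articles: list of outcome dicts; prefer whichever reports more outcomes."""
        return max(articles, key=len)

    print(pick_response_data({"responders": 40}, {"responders": 35}))              # -> unpublished
    print(pick_article([{"responders": 40}, {"responders": 38, "dropouts": 12}]))  # -> fuller article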

from 2005-2012, i spent 7 years taking 17 different psychotropic medications covering several classes.  i would be taking 3-7 medications at a time, and 6 of the 17 medications listed below were taken at or above the maximum clinical dosage before i moved on to trying the next unhelpful cocktail.
 
antidepressants (SSRIs, SNRIs, NDRIs, tetracyclics): zoloft, wellbutrin, effexor, lexapro, prozac, cymbalta, remeron
antipsychotics (atypical): abilify, zyprexa, risperdal, geodon
sleep aids (benzos, off-label antidepressants & antipsychotics, hypnotics): seroquel, temazepam, trazodone, ambien
anxiolytics: buspar
anticonvulsants: topamax
 
i tapered off all psychotropics from late 2011 through early 2013, one by one.  since quitting, i've been cycling through severe, disabling withdrawal symptoms spanning the gamut of the serious, less serious, and rather worrisome side effects of these assorted medications.  previous cross-tapering and medication or dosage changes had also caused undiagnosed withdrawal symptoms.
 
brainpan addlepation

