Constant effect in randomized clinical trials with quantitative outcome. A methodological review

  • Authors: Jordi Cortés Martínez
  • Thesis supervisors: Erik Cobo (supervisor), José Antonio González Alastrué (co-supervisor)
  • Defense: Universitat Politècnica de Catalunya (UPC) (Spain), 2021
  • Language: Spanish
  • Subjects:
  • Full text not available
  • Abstract
    • The past decade has seen continuous growth in so-called precision medicine, due especially to great advances in genetics. While applying it presently goes unquestioned in certain fields like oncology, it is more controversial in other medical specialties that usually practice it. Precision medicine is justified under two assumptions. First, it must be more cost-effective than the universal standard of care, as a world with limited resources requires that an individual treatment's benefits be inversely related to the number of people on whom it is effective. Second, and most importantly, the intervention under study should actually show different responses among patients or subgroups of them, which is what this work focuses on. Strictly speaking, the fundamental problem of causal inference makes the latter requirement impossible to prove, because a conventional trial observes each patient's outcome under only a single treatment. However, the variability of a continuous outcome provides important information about the presence (or absence) of a constant treatment effect, a direct consequence of which is that the outcome variance remains unchanged under different treatment regimens. Thus, homoscedasticity may be a useful tool for testing the hypothesis of a homogeneous effect.
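
      As an illustration of this idea (a minimal sketch, not a description of the thesis's own procedure), outcome variances in two independent arms can be compared with an F-test on their ratio; the Python code below assumes approximately normal outcomes and uses hypothetical data.

          import numpy as np
          from scipy import stats

          def variance_ratio_test(experimental, reference):
              """Two-sided F-test of Var(experimental) == Var(reference)."""
              e = np.asarray(experimental, dtype=float)
              r = np.asarray(reference, dtype=float)
              ratio = e.var(ddof=1) / r.var(ddof=1)            # sample variance ratio (E / R)
              p_upper = stats.f.sf(ratio, e.size - 1, r.size - 1)
              return ratio, 2 * min(p_upper, 1 - p_upper)      # two-sided p-value

          # Hypothetical example: a constant treatment effect shifts the mean only,
          # so both arms keep the same variance and the ratio stays close to 1.
          rng = np.random.default_rng(1)
          exp_arm = rng.normal(loc=5.0, scale=2.0, size=150)
          ref_arm = rng.normal(loc=0.0, scale=2.0, size=150)
          print(variance_ratio_test(exp_arm, ref_arm))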

      Our work here conducts a methodological review of randomized clinical trials (RCTs) with two treatment arms and a quantitative primary endpoint. Among other variables, we collected the outcome and baseline variances for each treatment group with two purposes: to quantify the outcome variance ratio between the experimental and reference groups, and to estimate the proportion of studies with variance discrepancies large enough to be attributed to a heterogeneous treatment effect among participants. This variance comparison was carried out between treatment arms (independent by randomization) and over time, contrasting the end-of-study and baseline outcomes.
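
      For the comparison over time the baseline and end-of-study measurements are paired, so an ordinary F-test does not apply. One standard option (an assumption here, not necessarily the exact method used in the thesis) is a Pitman-Morgan-type test, which exploits the fact that two paired variances are equal exactly when the sum and the difference of the measurements are uncorrelated.

          import numpy as np
          from scipy import stats

          def pitman_morgan_test(baseline, final):
              """Compare Var(final) with Var(baseline) for paired measurements."""
              b = np.asarray(baseline, dtype=float)
              f = np.asarray(final, dtype=float)
              ratio = f.var(ddof=1) / b.var(ddof=1)      # final-to-baseline variance ratio
              # Var(final) == Var(baseline)  <=>  Cov(final + baseline, final - baseline) == 0
              _, p_value = stats.pearsonr(f + b, f - b)
              return ratio, p_value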

      The Medline database provided us with 208 randomized clinical trials fulfilling the eligibility criteria and published in the years 2004, 2007, 2010 and 2013. A random effects model was used to estimate the variance ratios (experimental to reference), whose mean was 0.89 (95% CI from 0.81 to 0.97). Thus, contrary to popular belief, the point estimate indicates that the experimental treatments reduce the variability of patient response by 11%. The experimental group's variance ratio (final to baseline) in the comparison over time was 0.86 (95% CI from 0.76 to 0.98), meaning lower variability at the end of the study. This analysis provides no statistical evidence to justify ruling out a constant intervention effect on our target population in four out of five studies (80.3%, 95% CI from 74.1 to 85.3%). This percentage barely changed in four sensitivity analyses, with percentage point estimates ranging from 79.8 to 90.0%. Among the studies in which we did find evidence of a non-constant intervention effect, the experimental group showed greater outcome variance than the reference arm in 7.2% of all trials and lower variance in 12.5%. The high number of studies with lower variability in the experimental group can be explained by the ceiling and floor effects of some measurement scales, which generally group patients at one of the scale boundaries in cases of highly effective interventions.
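
      The abstract does not specify how the random effects model was fitted; a minimal sketch of one common approach, pooling per-study log variance ratios with the DerSimonian-Laird estimator (an assumption on our part), would look like this:

          import numpy as np

          def pool_log_variance_ratios(log_vr, se):
              """DerSimonian-Laird random-effects pooling of per-study log variance ratios."""
              y = np.asarray(log_vr, dtype=float)
              v = np.asarray(se, dtype=float) ** 2           # within-study variances
              w = 1.0 / v                                    # fixed-effect weights
              y_fixed = np.sum(w * y) / np.sum(w)
              q = np.sum(w * (y - y_fixed) ** 2)             # Cochran's Q
              c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
              tau2 = max(0.0, (q - (y.size - 1)) / c)        # between-study variance
              w_re = 1.0 / (v + tau2)                        # random-effects weights
              pooled = np.sum(w_re * y) / np.sum(w_re)
              se_pooled = np.sqrt(1.0 / np.sum(w_re))
              low, high = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
              return np.exp(pooled), np.exp(low), np.exp(high)   # back on the ratio scale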

      This work aims to show that comparing variances provides evidence on whether or not precision medicine is a sensible choice for a specific treatment. When both arms have equal variances, a simple interpretation is that the treatment effect is constant. If true, searching for any predictors of a differential response is futile. This means that the average treatment effect can be viewed as an individual treatment effect, which justifies using a single clinical guideline for all patients fulfilling the eligibility criteria. This in turn supports using parallel controlled trials to guide decision-making in these circumstances.

