
RCTs Tell You If a Drug Can Work. RWE Tells You If It Will.


Nikhil Tiwari


Drug development still runs on a model built for control. A molecule moves from discovery into preclinical testing, then into Phase I, II, and III trials, and sometimes into post-approval follow-up after launch. That process gave medicine its gold standard: the randomized controlled trial. But the question patients, physicians, payers, and even regulators increasingly need answered is not only whether a drug can work under ideal conditions. It is whether it will work in routine care, across messy populations, over time.

That gap is why real-world evidence matters. RWE does not replace randomized trials. It complements them. It helps teams understand disease burden earlier, design smarter studies, measure what happens after launch, and make decisions that are closer to how medicine is actually practiced.

Why the gold standard still leaves gaps

Randomized trials became the gold standard for a reason. They use randomization to reduce confounding, pre-specified analysis plans to avoid post hoc storytelling, and close follow-up to monitor adherence and safety. In many trials, blinding further protects the integrity of the results. All of that gives RCTs strong internal validity. If the trial is well-run, you can be more confident that the treatment caused the observed outcome.

But that rigor comes with tradeoffs. Trial populations are often highly selected. Patients may be younger, healthier, more adherent, and less medically complex than the people who will eventually receive the drug in everyday practice. Concomitant medications are tightly managed. Follow-up is fixed. Care is delivered by investigators working under protocol, not by ordinary clinicians managing crowded clinics and imperfect workflows.

That is why RCTs often tell you about efficacy while the market ultimately cares about effectiveness. A drug can perform well in a controlled study and still behave differently once age, comorbidities, adherence problems, switching, and real treatment variation enter the picture.

RCTs are built to answer whether a treatment works on average under controlled conditions. Real-world evidence asks what happens when that treatment meets actual practice.

There is another gap. Traditional trials are not especially good at answering questions about long-term value. Payers want to know what happens to hospitalizations, downstream utilization, total cost of care, and quality of life in the population they actually cover. Clinicians want to know which patients are most likely to benefit. Patients want to know what a treatment is likely to do for someone like them, not just what happened to the average participant in a study.

RWD is the raw material. RWE is the conclusion.

Real-world data is, in the FDA's definition, health data collected outside the constraints of the classical randomized trial. It can come from electronic health records, claims and billing systems, product and disease registries, outpatient or in-home monitoring, and increasingly from digital tools such as wearables and patient-reported systems. Some of it is structured. Much of it is not.

Real-world evidence is what you get when that raw material is turned into a credible answer. That distinction matters. A large database is not evidence by itself. An EHR extract is not evidence. Evidence appears only when the research question is clear, the data source is fit for purpose, the cohort and comparator are well-defined, and the study design and analysis are strong enough to support an inference.

This is why RWE can be incredibly useful or deeply misleading. The same messiness that makes real-world data representative also makes it noisy, heterogeneous, and vulnerable to bias. Good RWE is never just a data volume story. It is a question-design-analysis story.

Data does not become evidence just because it is large. It becomes evidence when it is fit for the decision being made.
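That pipeline from raw records to a defined cohort can be sketched in a few lines. Everything here is hypothetical: the field names, diagnosis codes, and inclusion criteria simply stand in for whatever a real study would pre-specify.

```python
# Toy sketch: turning raw real-world records into a defined cohort
# with an explicit comparator. All fields and thresholds are invented.
from datetime import date

# Hypothetical raw data: one record per patient.
raw_records = [
    {"id": 1, "age": 67, "diagnosis": "T2D", "index_date": date(2021, 3, 1), "drug": "A"},
    {"id": 2, "age": 45, "diagnosis": "T2D", "index_date": date(2021, 6, 9), "drug": "B"},
    {"id": 3, "age": 72, "diagnosis": "CKD", "index_date": date(2021, 1, 5), "drug": "A"},
    {"id": 4, "age": 58, "diagnosis": "T2D", "index_date": date(2020, 11, 2), "drug": "B"},
]

def in_cohort(r):
    """Pre-specified inclusion criteria: adults with the target
    diagnosis, indexed within the study window."""
    return (
        r["diagnosis"] == "T2D"
        and r["age"] >= 18
        and date(2020, 1, 1) <= r["index_date"] <= date(2021, 12, 31)
    )

cohort = [r for r in raw_records if in_cohort(r)]
treated = [r for r in cohort if r["drug"] == "A"]      # exposure arm
comparator = [r for r in cohort if r["drug"] == "B"]   # active comparator arm

print(len(cohort), len(treated), len(comparator))  # → 3 1 2
```

The point is not the code but the discipline it forces: the question, window, and comparator are written down before any outcome is counted.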

What RWE adds that trials usually cannot

The first thing RWE adds is context. It can show how a disease behaves in the real world, how large the affected population really is, how patients move through lines of therapy, and where the burden of illness shows up across care settings. That is useful long before a registration trial begins. It improves trial feasibility, comparator selection, endpoint choice, and protocol design.

It also adds breadth. Because RWD often reflects heterogeneous populations, RWE can help teams understand performance in patients who look more like reality: older adults, people with comorbidities, patients on multiple medications, and populations that would have been excluded or underrepresented in the pivotal study.

Then there is time. Some questions only become visible after approval, when exposure grows and follow-up stretches out. Rare adverse events, long-term safety signals, treatment persistence, and comparative effectiveness in routine care are often better studied through post-market real-world evidence than through tightly bounded pre-approval trials.

And in some cases, RWE changes the design itself. Pragmatic clinical trials bring randomization into routine practice settings. External controls use historical trial data or real-world data to construct comparison arms when placebo is impractical or unethical. Hybrid designs combine clinical effectiveness and implementation questions in the same program. The boundary between trial evidence and real-world evidence is getting more porous.

Why regulators and payers are leaning in

The industry has been pushed in this direction by economics as much as by science. Biopharma R&D is expensive, slow, and failure-prone. At the same time, health systems are under pressure to justify cost against benefit, especially for high-priced therapies, small populations, and products approved on surrogate endpoints or single-arm studies. That combination creates demand for evidence that is both faster and more decision-relevant.

That is part of the reason governments and agencies have spent the last decade creating more room for RWE. In the United States, the 21st Century Cures Act pushed the FDA to evaluate how real-world evidence could support new indications and post-approval requirements. In Europe, tools like conditional marketing authorisation and adaptive pathways created space to pair earlier access with ongoing evidence generation from real-world use.

The important signal is not that regulators have lowered the bar. It is that they are increasingly willing to ask a more useful question: when can a well-designed study using real-world data answer a regulatory or reimbursement question credibly enough to matter?

What separates useful RWE from noise

The answer starts with data quality, but it does not end there. Fit-for-purpose data, transparent definitions, sensible comparators, careful handling of missingness, and robust statistical methods are the foundation. In an RCT, bias is controlled primarily through the design and conduct of the trial. In RWE, bias is controlled much more heavily through the analysis. That raises the bar for methodological discipline.
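A toy example makes the stakes concrete. The numbers below are invented, with "severity" playing the confounder role: sicker patients were more likely to receive the treatment, so a crude comparison makes the drug look harmful while a stratified analysis shows the within-stratum effect pointing the other way. Real RWE methods (propensity scores, weighting, matching) are more sophisticated versions of this same adjustment.

```python
# Toy sketch of confounding by indication and adjustment by stratification.
# Counts per stratum: (events, n) for each arm. All numbers are invented.
strata = {
    "mild":   {"treated": (2, 100),  "control": (10, 400)},
    "severe": {"treated": (60, 400), "control": (20, 100)},
}

def rate(events, n):
    return events / n

# Crude (unadjusted) comparison pools everyone together.
t_events = sum(s["treated"][0] for s in strata.values())
t_n = sum(s["treated"][1] for s in strata.values())
c_events = sum(s["control"][0] for s in strata.values())
c_n = sum(s["control"][1] for s in strata.values())
crude_diff = rate(t_events, t_n) - rate(c_events, c_n)

# Adjusted comparison: average the within-stratum rate differences,
# weighted by stratum size.
total = t_n + c_n
adj_diff = sum(
    (rate(*s["treated"]) - rate(*s["control"]))
    * (s["treated"][1] + s["control"][1]) / total
    for s in strata.values()
)

print(f"crude: {crude_diff:+.3f}, adjusted: {adj_diff:+.3f}")
```

Here the crude difference is positive (treatment looks worse) while the adjusted difference is negative (treatment looks better within each severity level). In a randomized trial, randomization would have broken the link between severity and treatment assignment before any analysis began.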

Technology helps, but only if it is used correctly. Natural language processing can pull clinical signals from free text. AI and machine learning can help structure unstructured data and surface patterns at scale. Common data models like OMOP can make multi-source analysis more consistent. But none of these tools rescue a bad question or a weak design. They make good evidence generation faster; they do not make weak evidence trustworthy.
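The common-data-model idea is easy to illustrate. The sketch below maps two differently shaped source records into one shared schema so a single query can span both. The field names are illustrative only, not the actual OMOP table definitions.

```python
# Toy sketch of the common-data-model idea behind OMOP: harmonize
# differently shaped sources into one schema before analysis.
# Field names are invented, not real OMOP tables.

ehr_record = {"patient": "p1", "dx_code": "E11.9", "seen_on": "2022-04-01"}
claims_record = {"member_id": "m7", "icd10": "E11.9", "service_date": "2022-05-12"}

def from_ehr(r):
    """Map an EHR-shaped record into the shared schema."""
    return {"person_id": r["patient"], "condition_code": r["dx_code"],
            "event_date": r["seen_on"]}

def from_claims(r):
    """Map a claims-shaped record into the same shared schema."""
    return {"person_id": r["member_id"], "condition_code": r["icd10"],
            "event_date": r["service_date"]}

harmonized = [from_ehr(ehr_record), from_claims(claims_record)]

# Once both sources share a schema, a multi-source query is one line.
t2d_events = [r for r in harmonized if r["condition_code"] == "E11.9"]
print(len(t2d_events))  # → 2
```

The mapping work is mundane, but it is exactly the kind of upfront investment that separates a reusable RWE capability from a one-off study.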

The teams that do this well build RWE as a capability, not as a one-off study. They invest in data access, analytical rigor, and cross-functional judgment so they can move from hypothesis to evidence without treating each study like a bespoke consulting project.

The future is not RCT versus RWE

The right mental model is not a fight between randomized trials and real-world evidence. It is an evidence stack. RCTs remain essential for causal clarity. RWE adds relevance, scale, continuity, and practical decision support. One tells you whether a treatment can work under controlled conditions. The other helps you understand what happens when that treatment enters the real world.

That is the shift more teams are waking up to. The question is no longer whether RWE belongs in drug development. It is whether your organization can generate decision-grade evidence quickly enough to matter when the next regulatory, clinical, or access decision arrives.