There is a need for extensive standardisation and clear accountability - but also for an acknowledgment that variations do occur and can be accounted for, says Dr Janet Woodcock. The FDA deputy commissioner and chief medical officer was addressing a recent workshop in Washington on 'Defining and Implementing Quality in Clinical Investigations from Design to Completion', held jointly by the Drug Information Association (DIA) and the FDA's Office of Critical Path Programs.

The workshop was part of the Human Subject Protection and Bioresearch Monitoring Initiative, announced by the FDA in June 2006 and focused largely on data quality and the oversight of institutional review boards (IRBs). The initiative comes under the broader umbrella of the US agency's Critical Path programme, which aims to modernise and streamline medical product development.

The context for improving quality throughout clinical research, Woodcock noted, was that while the process generally produced high-quality data, much of the evidence needed to support modern evidence-based medicine was never generated. This was the legacy of an "extremely inefficient and therefore expensive" system, in which regulatory burdens and the lack of a stable infrastructure limited the number of clinical questions that could be pursued. Fraud was rare, Woodcock claimed, but when it occurred it might go undetected for some time, tarnishing the reputation of "the research enterprise" and eroding trust once it was eventually discovered.

At the same time, the whole clinical research environment had changed dramatically, with the emergence of multicentre and international trials, increasing study complexity, the swelling ranks of parties involved in clinical trials, and a more intense focus on evidence development from both government agencies and healthcare payers. But the onus for quality assurance was not just on the regulator, Woodcock warned.
Given its resource limitations, the FDA could provide only an audit function. Quality was a "systems function" that had to be built in; it could not be tested or inspected into a product.

An appropriate definition of quality in clinical trials would be "fitness for use", Woodcock suggested - in other words, the data had to be fit for the FDA to make regulatory decisions, for sponsors to support their claims about the product, and for physicians or patients to make treatment choices. She cited a definition of data quality from an Institute of Medicine workshop: data should be good enough that a decision based on them would not change if completely accurate data were used instead.

All the same, Woodcock added, the ultimate goal was not "no defect" but rather "acceptable levels of variation". These variations might arise from problems with the conduct of the trial, poor recordkeeping or flawed procedures. What was important was to define prospectively, using a risk-based approach, an acceptable level of variation that would not affect reliability.

Woodcock gave a number of examples of areas in which quality could be built more firmly into the clinical trial process, such as standardising protocol development, case report forms, data capture/transport and study content (e.g., standardised trial endpoints and adverse outcome measures). A number of these issues are being addressed by the FDA through regulatory initiatives, such as final guidance on the use of computerised systems in trials, draft guidance on adverse reaction reporting to IRBs, and a proposed rule on the falsification of trial data.

But bringing clinical trials up to date had to be a collaborative endeavour, Woodcock stressed - one whose success was critical not only to the FDA and industry but to all stakeholders, including patients, academic investigators and the general public.