Tip: See the about page for updates on new features and overall design.
Help Contents (DRAFT ver 0.9)
This tutorial is designed to help you use the PICO.vet tool and to provide guidance on using a PICOTT+ approach to building a focused clinical question.
Our general recommendation is to start broadly and then iteratively refine your queries if needed. For example, unless your question involves a widely researched problem or you seek information about specific breed risks, limit the use of detailed patient attributes (e.g., species, breed, sex) or publication types that are relatively scarce in veterinary medicine compared with human medicine (e.g., clinical guidelines).
The Focused Clinical Question
Clinical questions are of two basic types: background questions and foreground questions.
Background questions are general questions. They are common among veterinary students and clinicians early in their careers who are learning the foundational aspects of a disease or condition. They are typically structured around one of the basic interrogatives used in problem solving (who, what, where, when, why, or how).
"How is FIV transmitted?"
"What is the predominant erythrocyte morphology in lead poisoning?"
Foreground questions are more specific and refined and are usually triggered by a clinical problem arising directly from patient care. These are the types of questions formulated by more experienced clinicians or experts. They are inherently bound to the context of the patient, the environment surrounding the patient's condition, any existing or planned interventions (e.g., therapy, diagnostic tests, exposures), a comparison with alternatives, and a focus on one or more outcomes. They may include probabilistic outcome components (e.g., a likelihood ratio) rather than deterministic ones, in which prior and posterior probabilities are continuously refined as more information is gathered. Question refinement therefore has a temporal aspect.
Foreground questions have a prevailing formalism — a comparison of two related interventions.
Is methylprednisolone (Solu-Medrol®) an effective treatment for myelopathy?
In my canine patient, what is the probability of clinical improvement with parenteral methylprednisolone for acute traumatic myelopathy in the first 24 hours after injury when compared with alternative therapy or placebo? What are the risks for adverse events and what is their frequency?
Can fluoroquinolone antibiotics cause blindness in cats?
What is the likelihood of sudden onset blindness in clinically healthy cats treated with enrofloxacin at greater than 5 mg/kg/day compared with those treated at lower doses? Is there sound evidence of an exact etiology and mechanism for the blindness and is there an expected period of recovery or a return to former capacity?
Master (Template) Question
"In patients with similar signalment and clinical signs to mine, how do the attributes of one intervention compare with attributes of another alternative intervention and what is my target outcome?"
The default selection is "none". If two or more species are selected, they are joined with the "AND" operator. If there is interest in creating form logic that allows more complex queries, using a mix of AND, OR and NOT, please let us know. "Clear ALL" resets to "none" by deselecting all species. Use this option judiciously because it may constrain your search too narrowly.
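As an illustration of how selected species might be joined into a query fragment, here is a minimal sketch (a hypothetical helper, not PICO.vet's actual form logic), following the rule above that two or more selections are joined with "AND" and that "none" contributes nothing to the query:

```python
def join_species(species, operator="AND"):
    """Join selected species terms into a parenthesized query fragment.

    Returns an empty string when no species are selected (the "none"
    default), so the fragment can be safely appended to a larger query.
    Hypothetical sketch; the tool's internal query builder may differ.
    """
    terms = [s for s in species if s]  # drop empty selections
    if not terms:
        return ""
    return "(" + f" {operator} ".join(terms) + ")"

# Two species selected are joined with AND:
print(join_species(["dogs", "cats"]))  # (dogs AND cats)
print(join_species([]))                # "" (the "none" default)
```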
This is a free text entry so be sure to spell breed(s) correctly. Again, we recommend judicious use of all options in the patient group because it may limit your search results.
Establishing one or more active problems is an essential precondition to building a query of the biomedical literature. It's a process that may also be the least understood.
A structured clinical question specializes your inquiry to your patient. It should segment and orient the question space into a coherent set of discrete concepts that are unequivocal — especially to a search engine. Search terms should be contextually precise: as specific, unambiguous, and non-redundant as possible. Complex clinical concepts are inherently compositional; they are built from several atomic concepts (via "post-coordination" in informatics parlance). For example, the concept "diabetes mellitus" may have attributes that provide additional semantic qualification, such as "chronic"; likewise "osteoarthritis pain" may be qualified as "severe".
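To make post-coordination concrete, here is a minimal sketch of an atomic concept qualified by attributes. The structure is purely illustrative (it is not PICO.vet's internal concept model or any standard terminology's representation):

```python
from dataclasses import dataclass, field

@dataclass
class ClinicalConcept:
    """A post-coordinated concept: an atomic base concept plus qualifiers.

    Illustrative only; real terminologies (e.g., SNOMED CT) post-coordinate
    with formal attribute-value pairs rather than free-text qualifiers.
    """
    name: str
    qualifiers: list = field(default_factory=list)

    def render(self):
        # Render qualifiers before the base concept,
        # e.g. "chronic diabetes mellitus"
        return " ".join(self.qualifiers + [self.name])

dm = ClinicalConcept("diabetes mellitus", ["chronic"])
pain = ClinicalConcept("osteoarthritis pain", ["severe"])
print(dm.render())    # chronic diabetes mellitus
print(pain.render())  # severe osteoarthritis pain
```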
As clinical questions evolve during the course of a diagnostic evaluation, therapy, and re-evaluation, so does their degree of abstraction or certainty. Discrete observations are noted (e.g., vital signs or the values from a hemogram). They are then aggregated into one or more findings that are more clinically important or relevant. A constellation of findings is then grouped toward an actionable problem or health concern. Even when the initial encounter reveals pathognomonic signs (e.g., a fracture) that eliminate the need to iteratively refine a list of differential diagnoses, patient context will still yield some uncertainty and therefore the possibility of eliciting one or more clinical questions (e.g., "what is the best surgical approach for this fracture?").
An epistemological framework developed by Evans and Gadd2 (Figure 1) provides a hierarchy for how medical knowledge is characterized and how it may be used in problem solving. It is included here to emphasize that active problems (also referred to as facets) are at a level "used by researchers in medical artificial intelligence to describe the partitioning of a problem space...they are interim hypotheses that serve to divide the information in the problem into sets of manageable sub-problems and to suggest possible solutions."1
Even if we know our patient has diabetes mellitus, for example, our patient's unique array of active problems is the source of our clinical actions. It is mostly at the level of the problem that we build concepts into contextually focused questions. This is especially important if this information is captured promptly and accurately (with sufficient semantic precision) in an electronic health record system, because it enables automatic knowledge retrieval from background tasks our EHRs can run for us (see Context-Aware Knowledge Retrieval or Infobuttons).
The following summarizes each level in the epistemological framework, starting with observations at the bottom and system level knowledge at the top:
Observations are discrete, relatively atomic concepts we use to gather information about our patient. They are potentially relevant but typically are not clinically valuable alone. Related observations are aggregated to yield a finding.
[Hyperkalemia] (finding): K+=7.1 mEq/L (observation) + reference range (observation) + test sensitivity/specificity (observation)
Findings are aggregations of observations, such as a laboratory result plus its reference range and its test accuracy (e.g., sensitivity and specificity). They indicate that the information is clinically useful or important and may lead (with other findings and observations) to the initiation of a significant actionable problem.
Active problems are clinically important indicators of an underlying pathology that usually requires some action (and is worth noting separately in a problem list); they often consist of rather complex clinical concepts.
[Hyperkalemia, peracute, life threatening] (problem): hyperkalemia, severe (finding) + urethral obstruction (problem)
Diseases and Conditions
Diseases and conditions are obviously important search parameters, but alone they can be relatively abstract concepts that may not convey patient details sufficient for an effective intervention. The utility of a question to a particular patient (consider background vs. foreground questions) depends on concept granularity. When combined with more finely granular information about a patient (e.g., active problems, comorbidities), queries tend to be more relevant. However, in veterinary medicine, the relative paucity of good prospective literature (compared with human medicine) may require a greater degree of abstraction, with broader queries emphasizing sensitivity over specificity, to achieve good results.
A health concern6 is any health-related matter that is of interest to someone, including both the clinician and the client. It's something that should be monitored, tracked, scheduled for follow-up, or managed. It's used primarily in health record systems and involves two subconcepts:
First, the observations, findings, problems, diagnoses, or risks that trigger the concern.
Second, it typically involves invoking a health concern tracker within an EHR. The tracker may be based on evidence from a prevention guideline or a clinical algorithm that makes goal or milestone recommendations that require future testing or patient monitoring.
A key to a PICO (PICOTT+) question is the use of a comparison. If your question is about therapy, such as a drug or procedure, include a relevant comparison. That comparison could be a placebo or a drug within the same pharmacologic class, for example when comparing the efficacy of two cephalosporin antibiotics for canine pyoderma secondary to atopic dermatitis. Interventions can be many things, including therapeutics (e.g., drugs and procedures), exposures (e.g., a carcinogen or toxin), prevention strategies (e.g., immunizations), diagnostic tests, and so on.
The primary, target intervention of interest.
A relevant comparison intervention (e.g. similar class or category of therapeutic, or even a placebo), or a gold standard test or prevailing procedure (e.g. comparing surgical approaches for the treatment of cranial cruciate ligament injuries), etc.
Outcomes include clinical goals (both clinician- and client-directed), therapeutic endpoints, or consequences (e.g., an adverse event). They are not often considered when asking clinical questions, nor are they often well documented in patient care, but they are important because they can be aligned with key clinical trial objectives. Outcomes therefore help you determine the relevance of a given (prospective) study to your particular patient.
Currently, question types are derived from prebuilt clinical methodology filters developed at the National Library of Medicine by Haynes et al. for use in the Clinical Queries interface for PubMed. These have not been fully evaluated for veterinary medicine. Because the foundation query for PICO.vet constrains results to veterinary medicine, any question type selected is added to that query. Question types include etiology, diagnosis, therapy, and disease prevention. This is an especially important area for the veterinary library community to review and improve upon. See the bottom of this page for a link to submit issues, requests or suggestions to improve the search methodology.
Publication types include prospective research and reviews (e.g., systematic reviews, clinical trials, randomized controlled trials, meta-analyses, review articles, and practice guidelines).
Date limits include no limit, 1-year, 3-years or 5-years. If you have suggestions about other options, please let us know.
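One way these date limits could map onto a PubMed E-utilities search is via the `reldate` parameter (a relative date window in days, used with `datetype=pdat`). The sketch below is an assumption about the mapping, not a description of PICO.vet's actual implementation:

```python
def date_limit_to_reldate(limit):
    """Map a UI date-limit option to an E-utilities `reldate` value in days.

    Hypothetical mapping: assumes the options above translate to PubMed
    E-utilities' `reldate` parameter. Returns None for "no limit",
    meaning no date constraint is sent.
    """
    years = {"no limit": None, "1-year": 1, "3-years": 3, "5-years": 5}[limit]
    return None if years is None else years * 365

print(date_limit_to_reldate("5-years"))   # 1825
print(date_limit_to_reldate("no limit"))  # None
```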
Open Access Filter
This uses the following PubMed filter to constrain results to full-text or open access articles:
pmc cc license[filter]
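As an illustration, appending this filter to a query and building a PubMed E-utilities ESearch URL might look like the following. This is a minimal sketch; the function name and request flow are hypothetical, and PICO.vet's actual pipeline may differ:

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_search_url(query, open_access=False):
    """Optionally AND the open access filter onto a query, then build
    an ESearch URL. Hypothetical helper for illustration only."""
    term = query
    if open_access:
        term += " AND pmc cc license[filter]"
    return EUTILS + "?" + urlencode({"db": "pubmed", "term": term})

url = build_search_url("canine pyoderma AND cephalosporins", open_access=True)
print(url)
```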
Evidence-Based Medicine Filter
This is a boolean selector that toggles this filter ON or OFF (default), using the following MeSH term:
Figure 1: Epistemological framework representing the structure of medical knowledge for problem solving; in A Primer on Aspects of Cognition for Medical Informatics; Vimla L. Patel, PhD, DSc, José F. Arocha, PhD, and David R. Kaufman, PhD; J Am Med Inform Assoc. 2001 Jul-Aug; 8(4): 324–343.
Evans DA, Gadd CS. Managing coherence and context in medical problem-solving discourse. In: Evans DA, Patel VL (eds). Cognitive Science in Medicine: Biomedical Modeling. Cambridge, Mass.: MIT Press, 1989:211–55. | Link to PDF
Chandrasekaran B, Smith JW, Sticklen J. Deep models and their relation to diagnosis. Artif Intell Med. 1989;1:29–40. | Link to PDF