Piloting an approach to rapid and automated assessment of a new research initiative: Application to the National Cancer Institute's Provocative Questions initiative.

Hsu ER, Williams DE, Dijoseph LG, Schnell JD, Finstad SL, Lee JS, Greenspan EJ, Corrigan JG - Res Eval (2013)

Bottom Line: Focus shift scores tended to be relatively low, with applicants not straying far from previous research, but the majority of applications were found to be relevant to the PQ the application was addressing. Sensitivity to comparison text and inability to distinguish subtle scientific nuances are the primary limitations of our automated approaches based on text similarity, potentially biasing relevance and focus shift measurements. We also discuss potential uses of the relevance and focus shift measures including the design of outcome evaluations, though further experimentation and refinement are needed for a fuller understanding of these measures before broad application.


Affiliation: Office of Science Planning and Assessment, National Cancer Institute, Bethesda, MD 20892, USA; Thomson Reuters, Rockville, MD 20850, USA; and Center for Strategic Scientific Initiatives, National Cancer Institute, Bethesda, MD 20892, USA.

ABSTRACT
Funders of biomedical research are often challenged to understand how a new funding initiative fits within the agency's portfolio and the larger research community. While traditional assessment relies on retrospective review by subject matter experts, it is now feasible to design portfolio assessment and gap analysis tools leveraging administrative and grant application data that can be used for early and continued analysis. We piloted such methods on the National Cancer Institute's Provocative Questions (PQ) initiative to address key questions regarding diversity of applicants; whether applicants were proposing new avenues of research; and whether grant applications were filling portfolio gaps. For the latter two questions, we defined measurements called focus shift and relevance, respectively, based on text similarity scoring. We demonstrate that two types of applicants were attracted by the PQs at rates greater than or on par with the general National Cancer Institute applicant pool: those with clinical degrees and new investigators. Focus shift scores tended to be relatively low, with applicants not straying far from previous research, but the majority of applications were found to be relevant to the PQ the application was addressing. Sensitivity to comparison text and inability to distinguish subtle scientific nuances are the primary limitations of our automated approaches based on text similarity, potentially biasing relevance and focus shift measurements. We also discuss potential uses of the relevance and focus shift measures including the design of outcome evaluations, though further experimentation and refinement are needed for a fuller understanding of these measures before broad application.
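The abstract states only that the focus shift and relevance measures are "based on text similarity scoring" without specifying the scorer. The sketch below illustrates one plausible way such measures could be computed, assuming TF-IDF cosine similarity as the underlying text-similarity score; the function names, the use of scikit-learn, and the definition of focus shift as one minus similarity are illustrative assumptions, not the authors' actual method.

```python
# Illustrative sketch only: the paper says relevance and focus shift are
# "based on text similarity scoring" but does not specify the scorer here.
# TF-IDF cosine similarity is assumed as a stand-in; names are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def text_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between two documents under a shared TF-IDF model."""
    tfidf = TfidfVectorizer(stop_words="english")
    vectors = tfidf.fit_transform([text_a, text_b])
    return float(cosine_similarity(vectors[0], vectors[1])[0, 0])


def relevance_score(application_text: str, pq_text: str) -> float:
    """Relevance: how similar the application is to the PQ it addresses."""
    return text_similarity(application_text, pq_text)


def focus_shift_score(application_text: str, comparison_text: str) -> float:
    """Focus shift: how far the application moves away from prior work.

    Assumed here to be one minus the similarity to the comparison text
    (e.g. the applicant's own previous applications, or a general subset),
    so that higher values indicate a larger shift in focus.
    """
    return 1.0 - text_similarity(application_text, comparison_text)
```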


Figure 5 (rvt024-F5). Box plots of PQ application focus shift versus previous NIH general applications.

Mentions: Of the 754 PQ applications, the focus shift measurement classified 39 (5.2%) as shifted in focus relative to the by-self previous subset and 271 (35.9%) as shifted in focus relative to the general subset. Box plots of the by-self and general forms of the focus shift measurement for all PQ applications are shown in Figs 4 and 5, respectively. The portion of each distribution to the right of 0.53 represents applications that had a shift in focus relative to the previous applications for each question.
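The classification described above reduces to thresholding the focus shift score at 0.53. A minimal sketch follows, assuming the scores have already been computed; the 0.53 cut-off comes from the text, while the helper functions are hypothetical.

```python
# Sketch of the classification described above: an application counts as
# "shifted in focus" when its focus shift score exceeds the 0.53 cut-off.
# The scores themselves are assumed to have been computed already.
FOCUS_SHIFT_THRESHOLD = 0.53  # cut-off reported in the text


def classify_shifted(scores: list[float],
                     threshold: float = FOCUS_SHIFT_THRESHOLD) -> list[bool]:
    """Flag each application whose score falls to the right of the cut-off."""
    return [score > threshold for score in scores]


def summarize(scores: list[float]) -> tuple[int, float]:
    """Count and percentage of applications classified as shifted in focus."""
    shifted = sum(classify_shifted(scores))
    return shifted, 100.0 * shifted / len(scores)
```

Applied to the by-self and general score distributions, a summary of this form would yield counts and percentages analogous to the 39/754 (5.2%) and 271/754 (35.9%) figures reported above.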

