http://www.clinfowiki.org/wiki/api.php?action=feedcontributions&user=TheoBiblio&feedformat=atomClinfowiki - User contributions [en]2024-03-29T14:41:32ZUser contributionsMediaWiki 1.22.4http://www.clinfowiki.org/wiki/index.php/Terms_related_to_privacy,_confidentiality,_and_securityTerms related to privacy, confidentiality, and security2015-11-21T20:03:31Z<p>TheoBiblio: /* Important terms related to patient privacy, data confidentiality, and security */</p>
<hr />
<div>== Important terms related to patient privacy, data confidentiality, and security ==<br />
<br />
* [[Acceptable Use Policy (AUP)]]<br />
* [[Access control]]<br />
* [[Administrative Safeguards]]<br />
* [[Anonymization of data]]<br />
* [[Antivirus program]]<br />
* [[ARRA]]<br />
* [[Attestation]]<br />
* [[Audit trail]]<br />
* [[Authentication]]<br />
* [[Authorization]]<br />
* [[Autonomy]]<br />
* [[Availability]]<br />
* [[Backup]]<br />
* [[Biometrics]]<br />
* [[Black hat hacker]]<br />
* [[Blacklisting]]<br />
* [[Break Glass]]<br />
* [[Business Associates]]<br />
* [[Business continuity]]<br />
* [[Business escrow]]<br />
* [[Certificates]]<br />
* [[Co-mingled records]]<br />
* [[Confidentiality]]<br />
* [[Contingency Plan]]<br />
* [[Cookies]]<br />
* [[Covered Entities]]<br />
* [[Cryptographic Checksum]]<br />
* [[Data Breach]]<br />
* [[Data confidentiality]]<br />
* [[Data Governance]]<br />
* [[Data integrity]]<br />
* [[Data re-identification]]<br />
* [[Data Retention]]<br />
* [[Data security]]<br />
* [[Data Use Agreement]]<br />
* [[Decrypting]]<br />
* [[De-Identified Data]]<br />
* [[Digital Signature]]<br />
* [[Disaster Recovery Plan]]<br />
* [[Disclosure]]<br />
* [[Electronic Signature]]<br />
* [[Emancipated Minor]]<br />
* [[Encryption]]<br />
* [[FERPA]]<br />
* [[FHIR]]<br />
* [[Firewall]]<br />
* [[FTP (File Transfer Protocol)]]<br />
* [[Genetic Information]]<br />
* [[Grey Hat Hacker]]<br />
* [[Hacker]]<br />
* [[Health Insurance Portability and Accountability Act (HIPAA)]]<br />
* [[Hot Sites]]<br />
* [[HTTPS protocol]]<br />
* [[Identifiable Health Data]]<br />
* [[Identity]]<br />
* [[In loco parentis]]<br />
* [[Information Security Officer (ISO)]]<br />
* [[Institutional Review Board (IRB)]]<br />
* [[Integrity]]<br />
* [[JavaScript]]<br />
* [[Limited Data Set]]<br />
* [[Malicious Software]]<br />
* [[Malware]]<br />
* [[Marketing]]<br />
* [[Medical Record Number]]<br />
* [[Minimum Necessary]]<br />
* [[Mission Critical]]<br />
* [[Non-repudiation]]<br />
* [[Password]]<br />
* [[Password change policy]]<br />
* [[Patient privacy]]<br />
* [[Patient Safety and Quality Improvement Act]]<br />
* [[Personal identifiers]]<br />
* [[Personally identifiable data]]<br />
* [[Phishing]]<br />
* [[Pretty Good Privacy]]<br />
* [[Privacy]]<br />
* [[Private key]]<br />
* [[Protected Health Information (PHI)]]<br />
* [[Provider Identity Theft]]<br />
* [[Proxy access]]<br />
* [[Proxy server]]<br />
* [[Psychotherapy Notes]]<br />
* [[Public key]]<br />
* [[Remote login]]<br />
* [[Right to Privacy]]<br />
* [[Risk analysis]]<br />
* [[Risk Assessment]]<br />
* [[Risk mitigation]]<br />
* [[Role-based access]]<br />
* [[Rootkit]]<br />
* [[Secure FTP]]<br />
* [[Secure Multipurpose Internet Mail Extensions]]<br />
* [[Secure Sockets Layer]]<br />
* [[Security audit]]<br />
* [[Security flaw]]<br />
* [[Security Policy]]<br />
* [[Security Rule]]<br />
* [[Spoofing]]<br />
* [[SSH]]<br />
* [[System Assessment]]<br />
* [[System Security]]<br />
* [[TCP/IP]]<br />
* [[Temporal Key Integrity Protocol (TKIP)]]<br />
* [[Transport Layer Security]]<br />
* [[Treatment, Payment and Operation (TPO)]]<br />
* [[Trojan horse]]<br />
* [[Two-factor authentication]]<br />
* [[Unemancipated Minor]]<br />
* [[Virtual Private Network]]<br />
* [[Virus]]<br />
* [[White hat hacker]]<br />
* [[Wi-Fi Protected Access (WPA)]]<br />
* [[Wired Equivalent Privacy (WEP)]]<br />
* [[Worm]]<br />
<br />
<br />
[[Category: Definition]]</div>TheoBibliohttp://www.clinfowiki.org/wiki/index.php/MarketingMarketing2015-11-21T20:01:08Z<p>TheoBiblio: Created page with "This information is taken from the U.S. Department of Health & Human Services Health Information Privacy website. <ref name="HIPPA-Mktg">OCR HIPAA Privacy. "Marketing." N.p., ..."</p>
<hr />
<div>This information is taken from the U.S. Department of Health & Human Services Health Information Privacy website.<ref name="HIPAA-Mktg">OCR HIPAA Privacy. "Marketing." N.p., 3 Apr. 2003. Web. <http://www.hhs.gov/ocr/privacy/hipaa/understanding/coveredentities/marketing.html>.</ref><br />
==Background==<br />
The HIPAA Privacy Rule gives individuals important controls over whether and how their protected health information is used and disclosed for marketing purposes. With limited exceptions, the Rule requires an individual’s written authorization before a use or disclosure of his or her protected health information can be made for marketing. So as not to interfere with core health care functions, the Rule distinguishes marketing communications from those communications about goods and services that are essential for quality health care. <br />
<br />
==How the Rule Works==<br />
The Privacy Rule addresses the use and disclosure of protected health information for marketing purposes by: <br />
<br />
*Defining what is “marketing” under the Rule; <br />
*Excepting from that definition certain treatment or health care operations activities; <br />
*Requiring individual authorization for all uses or disclosures of protected health information for marketing purposes with limited exceptions. <br />
<br />
==What is “Marketing”?==<br />
The Privacy Rule defines “marketing” as making “a communication about a product or service that encourages recipients of the communication to purchase or use the product or service.” Generally, if the communication is “marketing,” then the communication can occur only if the covered entity first obtains an individual’s “authorization.” This definition of marketing has certain exceptions, as discussed below. Examples of “marketing” communications requiring prior authorization are: <br />
<br />
*A communication from a hospital informing former patients about a cardiac facility that is not part of the hospital and that can provide a baseline EKG for $39, when the communication is not for the purpose of providing treatment advice. <br />
*A communication from a health insurer promoting a home and casualty insurance product offered by the same company.<br />
<br />
==What Else is “Marketing”?==<br />
Marketing also means: “An arrangement between a covered entity and any other entity whereby the covered entity discloses protected health information to the other entity, in exchange for direct or indirect remuneration, for the other entity or its affiliate to make a communication about its own product or service that encourages recipients of the communication to purchase or use that product or service.” This part of the definition of marketing has no exceptions. The individual must authorize these marketing communications before they can occur. Simply put, a covered entity may not sell protected health information to a business associate or any other third party for that party’s own purposes. Moreover, covered entities may not sell lists of patients or enrollees to third parties without obtaining authorization from each person on the list. For example, it is “marketing” when: <br />
<br />
*A health plan sells a list of its members to a company that sells blood glucose monitors, which intends to send the plan’s members brochures on the benefits of purchasing and using the monitors. <br />
*A drug manufacturer receives a list of patients from a covered health care provider and provides remuneration, then uses that list to send discount coupons for a new anti-depressant medication directly to the patients. <br />
<br />
==What is NOT “Marketing”?==<br />
<br />
The Privacy Rule carves out exceptions to the definition of marketing under the following three categories: <br />
<br />
(1) A communication is not “marketing” if it is made to describe a health-related product or service (or payment for such product or service) that is provided by, or included in a plan of benefits of, the covered entity making the communication, including communications about: <br />
<br />
*The entities participating in a health care provider network or health plan network; <br />
*Replacement of, or enhancements to, a health plan; and <br />
*Health-related products or services available only to a health plan enrollee that add value to, but are not part of, a plan of benefits. <br />
<br />
This exception to the marketing definition permits communications by a covered entity about its own products or services. For example, under this exception, it is not “marketing” when: <br />
<br />
*A hospital uses its patient list to announce the arrival of a new specialty group (e.g., orthopedic) or the acquisition of new equipment (e.g., x-ray machine or magnetic resonance image machine) through a general mailing or publication. <br />
*A health plan sends a mailing to subscribers approaching Medicare eligible age with materials describing its Medicare supplemental plan and an application form. <br />
<br />
(2) A communication is not “marketing” if it is made for treatment of the individual. For example, under this exception, it is not “marketing” when: <br />
<br />
*A pharmacy or other health care provider mails prescription refill reminders to patients, or contracts with a mail house to do so. <br />
*A primary care physician refers an individual to a specialist for a follow-up test or provides free samples of a prescription drug to a patient. <br />
<br />
(3) A communication is not “marketing” if it is made for case management or care coordination for the individual, or to direct or recommend alternative treatments, therapies, health care providers, or settings of care to the individual. For example, under this exception, it is not “marketing” when: <br />
<br />
*An endocrinologist shares a patient’s medical record with several behavior management programs to determine which program best suits the ongoing needs of the individual patient.<br />
*A hospital social worker shares medical record information with various nursing homes in the course of recommending that the patient be transferred from a hospital bed to a nursing home. <br />
<br />
For any of the three exceptions to the definition of marketing, the activity must otherwise be permissible under the Privacy Rule, and a covered entity may use a business associate to make the communication. As with any disclosure to a business associate, the covered entity must obtain the business associate’s agreement to use the protected health information only for the communication activities of the covered entity. <br />
<br />
==Marketing Authorizations and When Authorizations are NOT Necessary==<br />
Except as discussed below, any communication that meets the definition of marketing is not permitted, unless the covered entity obtains an individual’s authorization. To determine what constitutes an acceptable “authorization,” see 45 CFR 164.508. If the marketing involves direct or indirect remuneration to the covered entity from a third party, the authorization must state that such remuneration is involved. See 45 CFR 164.508(a)(3). A communication does not require an authorization, even if it is marketing, if it is in the form of a face-to-face communication made by a covered entity to an individual; or a promotional gift of nominal value provided by the covered entity. For example, no prior authorization is necessary when: <br />
<br />
*A hospital provides a free package of formula and other baby products to new mothers as they leave the maternity ward. <br />
*An insurance agent sells a health insurance policy in person to a customer and proceeds to market a casualty and life insurance policy as well. <br />
<br />
== References ==<br />
<references/></div>TheoBibliohttp://www.clinfowiki.org/wiki/index.php/Evaluating_healthcare_quality_using_natural_language_processingEvaluating healthcare quality using natural language processing2015-11-17T18:15:28Z<p>TheoBiblio: /* Related Articles */</p>
<hr />
<div>A Review of Baldwin KB. Evaluating healthcare quality using natural language processing. J Healthcare Qual 2008; 30: 24-29.<br />
<br />
==Introduction==<br />
<br />
One of the most pervasive trends in modern healthcare is the focus on [[Quality informatics|quality assessment]]. The [[Health care quality process measures|methodology for routine quality assessment]] is in its infancy. A large amount of healthcare data is available only in narrative formats (histories and physicals, progress notes, discharge summaries, physician orders) and evaluating such data for adverse events or failure to utilize evidence-based diagnostic/therapeutic measures is time consuming and labor intensive. [[Natural language processing (NLP)|Natural language processing]] is a possible technique to extract relevant structured data from narrative data.<br />
<br />
==Methods==<br />
<br />
[[EMR|EHRs]] for 60 women, 40 years of age or older, who visited a large academic medical center in 2001 were taken as the study sample. These EHRs were studied for a period of 2.5 years. The following variables were extracted, using manual coding and natural language processing: demographic variables (age, race, gender), risk factors for breast cancer (positive genetic screening, positive family history, personal history of cancer, breast complaints such as pain, masses, discharge), provider variables (screening and test results or follow-up), and mortality. Natural language processing was accomplished with the NUD*IST software package. Natural language processing data extraction was compared to manual extraction on the basis of efficiency (time required), false-positive rate, precision (the ratio of the number of documents for which desired information was retrieved to the total number of documents for which any information was retrieved), false-negative rate, and recall (the ratio of the number of documents retrieved with desired information to the total number of documents in the database with desired information).<br />
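The precision and recall ratios defined above can be made concrete with a small worked example. The counts below are hypothetical, chosen only so the derived values land near the figures the study reports (0.709 and 0.293); they are not the study's raw data.<br />

```python
# Hypothetical document counts for a retrieval task (not the study's raw data).
retrieved_with_desired_info = 29   # documents where the desired info was retrieved
retrieved_any_info = 41            # documents where any info was retrieved
total_with_desired_info = 99       # documents in the database holding the desired info

# Precision: desired-info retrievals over all documents with any retrieval.
precision = retrieved_with_desired_info / retrieved_any_info
# Recall: desired-info retrievals over all documents that actually held the info.
recall = retrieved_with_desired_info / total_with_desired_info

print(f"precision = {precision:.3f}")  # 0.707
print(f"recall = {recall:.3f}")        # 0.293
```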
<br />
==Results==<br />
<br />
Natural language processing was more efficient than manual data extraction and the difference was statistically significant. Both techniques retrieved race and gender with 100% frequency. Risk factors in the study population were too rare to merit comparison. There was not a statistically significant difference between the two techniques in the extraction of provider orders for follow-up. There was a statistically significant false-negative rate (21/60) for the detection of screening measures by natural language processing. The overall recall rate of natural language processing was only 0.293 while the precision was 0.709.<br />
<br />
==Discussion==<br />
<br />
Natural language processing, as implemented with NUD*IST, was less time-consuming than manual data extraction and performed well in retrieving variables that were consistently defined, such as gender, race, and test results. Other variables were not captured as well. The challenging variables, as would be expected, were those characterized by greater provider terminology variability. This led to an overall poor recall rate. The poor recall rate may reflect the specific software package, NUD*IST, which was developed for qualitative research, more than the concept itself. Improvements in the EHR may improve the effectiveness of natural language processing.<br />
<br />
==Comments==<br />
<br />
[[Natural language processing (NLP)]] is a promising tool to reconcile the clinician’s propensity for narrative data with the advantages of structured data in medical informatics. However, in this particular study and with this particular software package, natural language processing was relatively efficient and precision was acceptable, but recall of some variables was disappointing. The challenge is the variability, context-sensitivity, and inconsistency of medical terminology. (For those who are interested, NUD*IST, or Non-numerical Unstructured Data Indexing Searching and Theorizing, is a qualitative data research tool developed by QSR International, which is, according to Wikipedia, the world’s largest qualitative research software developer.)<br />
<br />
James Bailey<br />
<br />
==Related Articles==<br />
[[Use of a support vector machine for categorizing free-text notes]]<br />
<br />
[[Accessing primary care Big Data: the development of a software algorithm to explore the rich content of consultation records]]<br />
<br />
[[Category: New Technology]]<br />
[[Category: Interface, Usability and Accessibility]]<br />
[[Category: Natural language processing (NLP)]]</div>TheoBibliohttp://www.clinfowiki.org/wiki/index.php/Accessing_primary_care_Big_Data:_the_development_of_a_software_algorithm_to_explore_the_rich_content_of_consultation_recordsAccessing primary care Big Data: the development of a software algorithm to explore the rich content of consultation records2015-11-17T18:08:55Z<p>TheoBiblio: /* Objective */</p>
<hr />
<div>This is a brief review of the article "Accessing primary care Big Data: the development of a software algorithm to explore the rich content of consultation records."<ref name="Macrae2015">Macrae J, Darlow B, Mcbain L, et al. Accessing primary care Big Data: the development of a software algorithm to explore the rich content of consultation records. BMJ Open. 2015;5(8):e008160. http://bmjopen.bmj.com.ezproxyhost.library.tmc.edu/content/5/8/e008160.long.</ref><br />
<br />
==Abstract<ref name="Macrae2015"/>==<br />
===Objective===<br />
To develop a [[natural language processing (NLP)]] software inference algorithm to classify the content of primary care consultations using electronic health record ([[EMR|EHR]]) Big Data and subsequently test the algorithm's ability to estimate the prevalence and burden of childhood respiratory illness in primary care.<br />
<br />
===Design===<br />
Algorithm development and validation study. To classify consultations, the algorithm is designed to interrogate clinical narrative entered as free text, diagnostic (Read) codes created and medications prescribed on the day of the consultation.<br />
<br />
===Setting===<br />
Thirty-six consenting primary care practices from a mixed urban and semirural region of New Zealand. Three independent sets of 1200 child consultation records were randomly extracted from a data set of all general practitioner consultations in participating practices between January 1, 2008, and December 31, 2013, for children under 18 years of age (n=754,242). Each consultation record within these sets was independently classified by two expert clinicians as respiratory or non-respiratory, and subclassified according to respiratory diagnostic categories to create three "gold standard" sets of classified records. These three gold standard record sets were used to train, test and validate the algorithm.<br />
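The sampling design above — three independent sets of 1200 records drawn at random from the full pool — can be sketched as follows. The pool size, record IDs, fixed seed, and the reading of "independent" as non-overlapping are illustrative assumptions, not details taken from the paper.<br />

```python
import random

# Pool of consultation-record IDs. The paper's pool had n=754,242; a smaller
# hypothetical pool is used here so the sketch runs quickly.
all_ids = range(10_000)
rng = random.Random(0)  # fixed seed, for reproducibility of the sketch only

gold_sets = []          # candidate train / test / validation gold-standard sets
remaining = set(all_ids)
for _ in range(3):
    sample = rng.sample(sorted(remaining), 1200)
    gold_sets.append(sample)
    remaining -= set(sample)   # assumption: "independent" sets do not overlap

# Each set has 1200 records, and no record appears in more than one set.
assert all(len(s) == 1200 for s in gold_sets)
assert len(set().union(*map(set, gold_sets))) == 3600
```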
<br />
===Outcome Measures===<br />
Sensitivity, specificity, positive predictive value and F-measure were calculated to illustrate the algorithm's ability to replicate judgments of expert clinicians within the 1200 record gold standard validation set.<br />
<br />
===Results===<br />
The algorithm was able to identify respiratory consultations in the 1200 record validation set with a sensitivity of 0.72 (95% CI 0.67 to 0.78) and a specificity of 0.95 (95% CI 0.93 to 0.98). The positive predictive value of algorithm respiratory classification was 0.93 (95% CI 0.89 to 0.97). The positive predictive value of the algorithm classifying consultations as being related to specific respiratory diagnostic categories ranged from 0.68 (95% CI 0.40 to 1.00; other respiratory conditions) to 0.91 (95% CI 0.79 to 1.00; throat infections).<br />
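All of the reported measures derive from a 2×2 confusion matrix over the validation set. The sketch below uses hypothetical cell counts chosen so the derived values land near the paper's headline figures; the paper reports the derived measures, not these raw tallies.<br />

```python
# Hypothetical confusion-matrix cells for a 1200-record validation set
# (illustrative only; not counts reported by the paper).
tp, fn = 415, 161   # respiratory consultations: correctly / wrongly classified
tn, fp = 593, 31    # non-respiratory consultations: correctly / wrongly classified

sensitivity = tp / (tp + fn)             # true-positive rate
specificity = tn / (tn + fp)             # true-negative rate
ppv = tp / (tp + fp)                     # positive predictive value
f_measure = 2 * tp / (2 * tp + fp + fn)  # harmonic mean of PPV and sensitivity

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} "
      f"PPV={ppv:.2f} F={f_measure:.2f}")
# sensitivity=0.72 specificity=0.95 PPV=0.93 F=0.81
```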
<br />
===Conclusions===<br />
A software inference algorithm that uses primary care Big Data can accurately classify the content of clinical consultations. This algorithm will enable accurate estimation of the prevalence of childhood respiratory illness in primary care and resultant service utilization. The methodology can also be applied to other areas of clinical care.<br />
<br />
==Comments==<br />
The large data set, along with the large number of "gold standard" classifications, supports the very encouraging results from this fairly straightforward NLP algorithm. Sensitivity and specificity were especially impressive given the wide range of writing styles among the provider notes analyzed. Although retrospective classification as described in this study is both important and relevant, the authors' suggestion that the algorithm be integrated "into future versions of EHR software so that appropriate classification codes are suggested to clinicians in real time, thereby improving the quality and completeness of diagnostic coding" is especially exciting, given the wide range of clinical decision support ([[CDS]]) options available with concurrent classification.<br />
<br />
==Related Resources==<br />
<br />
[[Using natural language processing to identify problem usage of prescription opioids]]<br />
<br />
[[Visualizing unstructured patient data for assessing diagnostic and therapeutic history]]<br />
<br />
==References==<br />
<references/><br />
<br />
<br />
[[Category:Reviews]]<br />
[[Category:HI5313-2015-FALL]]<br />
[[Category:Natural language processing (NLP)]]</div>TheoBibliohttp://www.clinfowiki.org/wiki/index.php/Accessing_primary_care_Big_Data:_the_development_of_a_software_algorithm_to_explore_the_rich_content_of_consultation_recordsAccessing primary care Big Data: the development of a software algorithm to explore the rich content of consultation records2015-11-17T18:07:01Z<p>TheoBiblio: /* Comments */</p>
<hr />
<div>This is a brief review of the article "Accessing primary care Big Data: the development of a software algorithm to explore the rich content of consultation records."<ref name="Macrae2015">Macrae J, Darlow B, Mcbain L, et al. Accessing primary care Big Data: the development of a software algorithm to explore the rich content of consultation records. BMJ Open. 2015;5(8):e008160. http://bmjopen.bmj.com.ezproxyhost.library.tmc.edu/content/5/8/e008160.long.</ref><br />
<br />
==Abstract<ref name="Macrae2015">Macrae J, Darlow B, Mcbain L, et al. Accessing primary care Big Data: the development of a software algorithm to explore the rich content of consultation records. BMJ Open. 2015;5(8):e008160. http://bmjopen.bmj.com.ezproxyhost.library.tmc.edu/content/5/8/e008160.long.</ref>==<br />
===Objective===<br />
To develop a [[natural language processing (NLP)]] software inference algorithm to classify the content of primary care consultations using electronic health record [[EMR|EHR]] Big Data and subsequently test the algorithm's ability to estimate the prevalence and burden of childhood respiratory illness in primary care.<br />
<br />
===Design===<br />
Algorithm development and validation study. To classify consultations, the algorithm is designed to interrogate clinical narrative entered as free text, diagnostic (Read) codes created and medications prescribed on the day of the consultation.<br />
<br />
===Setting===<br />
Thirty-six consenting primary care practices from a mixed urban and semirural region of New Zealand. Three independent sets of 1200 child consultation records were randomly extracted from a data set of all general practitioner consultations in participating practices between January 1, 2008, and December 31, 2013, for children under 18 years of age (n=754,242). Each consultation record within these sets was independently classified by two expert clinicians as respiratory or non-respiratory, and subclassified according to respiratory diagnostic categories to create three "gold standard" sets of classified records. These three gold standard record sets were used to train, test and validate the algorithm.<br />
<br />
===Outcome Measures===<br />
Sensitivity, specificity, positive predictive value and F-measure were calculated to illustrate the algorithm's ability to replicate judgments of expert clinicians within the 1200 record gold standard validation set.<br />
<br />
===Results===<br />
The algorithm was able to identify respiratory consultations in the 1200 record validation set with a sensitivity of 0.72 (95% CI 0.67 to 0.78) and a specificity of 0.95 (95% CI 0.93 to 0.98). The positive predictive value of algorithm respiratory classification was 0.93 (95% CI 0.89 to 0.97). The positive predictive value of the algorithm classifying consultations as being related to specific respiratory diagnostic categories ranged from 0.68 (95% CI 0.40 to 1.00; other respiratory conditions) to 0.91 (95% CI 0.79 to 1.00; throat infections).<br />
<br />
===Conclusions===<br />
A software inference algorithm that uses primary care Big Data can accurately classify the content of clinical consultations. This algorithm will enable accurate estimation of the prevalence of childhood respiratory illness in primary care and resultant service utilization. The methodology can also be applied to other areas of clinical care.<br />
<br />
==Comments==<br />
The large data set along with the large number of "gold standard" classifications support the very encouraging results from this fairly straightforward NLP algorithm. Sensitivity and specificity were especially impressive given the wide-range of writing styles among provider notes analyzed. Although retrospective classification as described in this study is both important and relevant, the authors' comments regarding the algorithm being integrated "into future versions of EHR software so that appropriate classification codes are suggested to clinicians in real time, thereby improving the quality and completeness of diagnostic coding" is especially exciting, given the wide range of possible clinical decision support [[CDS]] options available with concurrent classification.<br />
<br />
==Related Resources==<br />
<br />
[[Using natural language processing to identify problem usage of prescription opioids]]<br />
<br />
[[Visualizing unstructured patient data for assessing diagnostic and therapeutic history]]<br />
<br />
==References==<br />
<references/><br />
<br />
<br />
[[Category:Reviews]]<br />
[[Category:HI5313-2015-FALL]]<br />
[[Category:Natural language processing (NLP)]]</div>TheoBibliohttp://www.clinfowiki.org/wiki/index.php/Implementing_culture_change_in_health_care:_theory_and_practiceImplementing culture change in health care: theory and practice2015-11-17T17:29:48Z<p>TheoBiblio: </p>
<hr />
<div>== First Review ==<br />
<br />
The following is a review of the article by Scott et al., ''Implementing culture change in health care: theory and practice''.<ref name="Change">Scott, T, Mannion, R, Davies, H, Marshall, M. (2003). Implementing culture change in health care: theory and practice. International Journal for Quality in Health Care 15, no. 2.<br />
http://intqhc.oxfordjournals.org/content/15/2/111.short</ref><br />
<br />
===Summary===<br />
The article is based on a review that focused on discussions concerning “organizational culture and culture change within health care organizations in the [[UK|UK]].”<br />
<br />
===Methodology===<br />
<br />
The methods consisted of a literature review covering “theoretical contributions” and results from published studies, supplemented by feedback from 30 healthcare experts in the UK and USA.<br />
<br />
===Results===<br />
<br />
Definitions of organizational culture varied within the scholarly community. The review confirmed that “culture change models” have been developed, and a list of key factors that impact culture change was identified:<br />
*Inadequate or inappropriate leadership;<br />
*Constraints imposed by external stakeholders and professional allegiances;<br />
*Perceived lack of ownership;<br />
*Subcultural diversity within health care organizations and systems.<br />
<br />
===Conclusion / Summary of Key Points=== <br />
Organizational culture change needs to be taken into account when restructuring health care, and it is important to understand the underlying theory during this process. In summary, the following points were concluded:<br />
*Organizational culture is complex;<br />
*It is important to distinguish between different types of subcultures;<br />
*Leadership plays a crucial role;<br />
*There are common barriers to culture change.<br />
In addition, it was noted that planning for culture change may not yield expected results. <br />
<br />
===Comments=== <br />
The article emphasized the difficulty of planning for organizational change and the challenges presented when changing culture. This also applies to current projects underway within the healthcare industry. More teams are realizing that even though projects involve technical aspects, a big portion of the projects deal with the implementation of organizational/culture change. <br />
<br />
===Related Articles===<br />
*[[EHR_implementation:_one_organization's_road_to_success]]<br />
<br />
== Second Review ==<br />
Add next review here.<br />
<br />
==References==<br />
<references/><br />
<br />
[[Category: Reviews]]<br />
[[Category: EHR implementation project]]<br />
[[Category: Methodologies and Frameworks]]<br />
[[Category: HI5313-2015-FALL]]</div>TheoBibliohttp://www.clinfowiki.org/wiki/index.php/Computerised_physician_order_entry-related_medication_errors:_analysis_of_reported_errors_and_vulnerability_testing_of_current_systemsComputerised physician order entry-related medication errors: analysis of reported errors and vulnerability testing of current systems2015-11-17T17:27:08Z<p>TheoBiblio: </p>
<hr />
<div>==Abstract==<br />
*IMPORTANCE: Medication computerised provider order entry (CPOE) has been shown to decrease errors and is being widely adopted. However, CPOE also has potential for introducing or contributing to errors.<br />
*OBJECTIVES: The objectives of this study are to (a) analyse medication error reports where CPOE was reported as a 'contributing cause' and (b) develop 'use cases' based on these reports to test vulnerability of current CPOE systems to these errors.<br />
*METHODS: A review of medication errors reported to United States Pharmacopeia MEDMARX reporting system was made, and a taxonomy was developed for CPOE-related errors. For each error we evaluated what went wrong and why and identified potential prevention strategies and recurring error scenarios. These scenarios were then used to test vulnerability of leading CPOE systems, asking typical users to enter these erroneous orders to assess the degree to which these problematic orders could be entered.<br />
*RESULTS: Between 2003 and 2010, 1.04 million medication errors were reported to MEDMARX, of which 63 040 were reported as CPOE related. A review of 10 060 CPOE-related cases was used to derive 101 codes describing what went wrong, 67 codes describing reasons why errors occurred, 73 codes describing potential prevention strategies and 21 codes describing recurring error scenarios. Ability to enter these erroneous order scenarios was tested on 13 CPOE systems at 16 sites. Overall, 298 (79.5%) of the erroneous orders were able to be entered including 100 (28.0%) being 'easily' placed, another 101 (28.3%) with only minor workarounds and no warnings.<br />
*CONCLUSIONS AND RELEVANCE: Medication error reports provide valuable information for understanding CPOE-related errors. Reports were useful for developing taxonomy and identifying recurring errors to which current CPOE systems are vulnerable. Enhanced monitoring, reporting and testing of CPOE systems are important to improve CPOE safety.<br />
<ref>Schiff 2015 http://www.ncbi.nlm.nih.gov/pmc/articles/PMC4392214/pdf/bmjqs-2014-003555.pdf</ref><br />
<br />
==Summary==<br />
===Introduction===<br />
Once thought to be highly error resistant, [[CPOE|Computerised Provider Order Entry (CPOE)]] has begun to show documented errors that had not been seen before. Due to these errors, the FDASIA recommended developing approaches for error compilation. Since 2003, when the USP error recording system became able to record CPOE errors, CPOE has become the third leading cause of errors being submitted. The goal of this article is to analyze data from 10,000 of these CPOE-reported errors and assess vulnerabilities of leading CPOE systems.<br />
<br />
===Methods===<br />
====Phase 1====<br />
The MEDMARX system was searched for CPOE errors from Jan 2003 to Apr 2010, and 63,040 reports were identified from a total of 1.04 million. These reports were then reviewed and coded according to the National Coordinating Council for Medication Error Reporting and Prevention.<br />
====Phase 2====<br />
A weighted scoring system was used based on error frequency, severity, generalisability, and testability, and based on these scores 21 test scenarios were chosen. The scenarios were then tested on test patients in a sample of leading vendor and homegrown CPOE systems, and the ease with which these orders could be entered was recorded.<br />
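The scenario-selection step can be illustrated with a small sketch. The criterion weights, ratings, and scenario names below are hypothetical, invented only to show the mechanics of a weighted score; they are not the study's actual values:

```python
# Hypothetical sketch of the scenario-selection step: each candidate error
# report is rated on the four stated criteria and the top N are kept.
# Weights and ratings below are invented for illustration.
WEIGHTS = {"frequency": 0.4, "severity": 0.3, "generalisability": 0.2, "testability": 0.1}

def scenario_score(ratings):
    """Weighted sum of per-criterion ratings (each rated 1-5)."""
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

candidates = [
    {"id": "wrong-patient order", "frequency": 5, "severity": 4, "generalisability": 5, "testability": 3},
    {"id": "duplicate therapy",   "frequency": 4, "severity": 3, "generalisability": 4, "testability": 5},
    {"id": "unit mismatch",       "frequency": 2, "severity": 5, "generalisability": 3, "testability": 4},
]

top = sorted(candidates, key=scenario_score, reverse=True)[:2]
print([c["id"] for c in top])  # → ['wrong-patient order', 'duplicate therapy']
```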
<br />
===Results===<br />
====Phase 1====<br />
Of the reported errors, 6.1% were attributed to CPOE systems, though from what was written in the reports the reviewers could confirm that only 49.8% of these were actually caused by CPOE systems.<br />
====Phase 2====<br />
338 error reports were identified as potential test scenarios, but the list was narrowed to 21 using the criteria described above. The test scenarios were tested on 13 representative systems at 16 different test sites. Overall, 79.5% of the errors could be placed, with 56.3% of those being easily placed and the rest requiring only a few workarounds. 26.6% of the orders produced warnings, 69% of which were passive while the others required workarounds.<br />
<br />
===Discussion===<br />
The report gives great detail about the types of errors that occur in CPOE systems, with about 56.3% of the 6% of errors occurring easily across 13 different systems. Many of the problems are not unique to CPOE and in fact occur in written orders as well. One problem seen is that warnings have been toned down so much, to avoid interrupting physicians, that they have in fact been causing other problems. Shockingly, in one of the systems the warnings had been turned off for an upgrade months prior and had not been reinstated until these tests were done. And while this report was based on self-reporting, it shows that there is a need to improve current CPOE systems as well as a need for a more standardized reporting system.<br />
<br />
===Comments===<br />
I think overall this is a very good paper, and the results would surprise most people who read it. I find it remarkable that CPOE systems are among the top three causes of medication errors, and the fact that systems can have their warnings turned off for months on end and still operate is astonishing. This technology is here to help physicians, but when we rely too much on technology there is still the chance that errors will occur and that we won't even know they are happening.<br />
<br />
I think your article addresses what we are often afraid to confront in HIT: medical errors. If an order is sent incorrectly, there is only a slight chance the pharmacist will catch it. However, CDS alerts and notifications may be able to reduce CPOE errors. Overall, making EHR systems safe, private, and able to reduce medical errors remains a great challenge.<br />
<br />
==References==<br />
<references/><br />
<br />
[[Category: CPOE]]<br />
[[Category: Reviews]]<br />
[[Category: HI5313-2015-FALL]]</div>TheoBibliohttp://www.clinfowiki.org/wiki/index.php/Understanding_differences_in_electronic_health_record_(EHR)_use:_linking_individual_physicians%27_perceptions_of_uncertainty_and_EHR_use_patterns_in_ambulatory_careUnderstanding differences in electronic health record (EHR) use: linking individual physicians' perceptions of uncertainty and EHR use patterns in ambulatory care2015-11-17T17:18:43Z<p>TheoBiblio: /* Introduction */</p>
<hr />
<div>==Introduction==<br />
The authors study the differences in individual physicians' [[Electronic Health Record (EHRs)|Electronic Health Record(EHR)]] use patterns and identify perceptions of uncertainty as an important new variable in understanding EHR use. <ref name ="EHR">Lanham, H. J., Sittig, D. F., Leykum, L. K., Parchman, M. L., Pugh, J. A., & McDaniel, R. R. (2014). Understanding differences in electronic health record (EHR) use: linking individual physicians' perceptions of uncertainty and EHR use patterns in ambulatory care. Journal of the American Medical Informatics Association, 21(1), 73-81. http://jamia.oxfordjournals.org/content/21/1/73.full.</ref><br />
<br />
==Abstract==<br />
This study was conducted through interviews and direct observation of physicians in an outpatient care organization.<br />
<br />
==Measurement==<br />
*Primary care physicians <br />
*Sub-specialists - endocrinologists, gastroenterologists, rheumatologists, neurologists, and podiatrists. <br />
*Spent 4 weeks in each practice interviewing and observing physicians during normal work activities<br />
HealthGroup implemented its EHR 7 years prior to data collection.<br />
<br />
==Results==<br />
Physicians' perceptions of uncertainty fell into three groups:<br />
*Uncertain reduction<br />
*Uncertain adoption<br />
*Uncertain Hybrid<br />
<br />
==Conclusions==<br />
This study contributes new understanding of the differences in how individual physicians use an EHR. <br />
*Findings <br />
#how they perceive uncertainty <br />
#how they view the role of information in managing uncertainty as they care for patients<br />
#evidence linking individual physicians' perceptions of uncertainty with their EHR use patterns. <br />
<br />
*Implications<br />
#health IT design <br />
#Implementation initiatives <br />
<br />
==References==<br />
<references/><br />
[[Category:Reviews]]<br />
[[Category:EHR]]<br />
[[Category:HI5313-2015-FALL]]</div>TheoBibliohttp://www.clinfowiki.org/wiki/index.php/Natural_language_processing_and_inference_rules_as_strategies_for_updating_problem_list_in_an_electronic_health_recordNatural language processing and inference rules as strategies for updating problem list in an electronic health record2015-11-11T23:52:31Z<p>TheoBiblio: </p>
<hr />
<div>This is a brief review of the article "Natural language processing and inference rules as strategies for updating problem list in an electronic health record."<ref name="Plazzotta2013">Plazzotta F, Otero C, Luna D, De quiros FG. Natural language processing and inference rules as strategies for updating problem list in an electronic health record. Stud Health Technol Inform. 2013;192:1163. http://ebooks.iospress.nl/publication/34379</ref><br />
<br />
NOTE: Except for the Comments below, all other information is taken directly from the article being reviewed.<br />
==Abstract<ref name="Plazzotta2013">Plazzotta F, Otero C, Luna D, De quiros FG. Natural language processing and inference rules as strategies for updating problem list in an electronic health record. Stud Health Technol Inform. 2013;192:1163. http://ebooks.iospress.nl/publication/34379</ref>==<br />
===Background===<br />
The problem-oriented electronic health record has become one of the most developed clinical documentation systems in medical informatics. While the advantages of a problem list are known and have been published in numerous studies, physicians do not always keep the problem list accurate, complete and updated.<br />
<br />
===Objective===<br />
To analyze natural language processing (NLP) techniques and inference rules as strategies to maintain completeness and accuracy of the problem list in EHRs.<br />
<br />
===Materials and Methods===<br />
Non systematic literature review in PubMed, in the last 10 years. Strategies to maintain the EHRs problem list were analyzed in two ways: inputting and removing problems from the problem list.<br />
<br />
===Results===<br />
NLP and inference rules have acceptable performance for inputting problems into the problem list. No studies using these techniques for removing problems were published Conclusion: Both tools, NLP and inference rules have had acceptable results as tools for maintain the completeness and accuracy of the problem list.<br />
<br />
===Conclusion===<br />
Natural language processing and inference rules have had acceptable results as tools for incorporating health problems into a problem list, mainly using limited sets of data. Further studies are needed to validate these rules in other areas and to extend the tools to a more comprehensively.<br />
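To illustrate the inference-rule strategy the review discusses, a rule engine can propose problem-list additions from structured data. The rule, threshold, and function below are a hypothetical sketch (the 6.5% HbA1c cut-off follows common diagnostic criteria), not an algorithm from the reviewed article:

```python
# Hypothetical inference rule: suggest adding diabetes to the problem list
# when two or more HbA1c results are at or above 6.5% and the problem is
# not already listed. Names and threshold are illustrative only.
def suggest_problems(problem_list, hba1c_results):
    """Return problem-list additions implied by structured lab data."""
    suggestions = []
    elevated = [v for v in hba1c_results if v >= 6.5]
    if len(elevated) >= 2 and "diabetes mellitus" not in problem_list:
        suggestions.append("diabetes mellitus")
    return suggestions

print(suggest_problems(["hypertension"], [7.1, 6.8, 5.9]))  # → ['diabetes mellitus']
print(suggest_problems(["diabetes mellitus"], [7.1, 6.8]))  # → []
```

A production system would of course draw on a terminology such as SNOMED CT rather than string matching, but the suggest-and-confirm pattern is the same.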
<br />
==Comments==<br />
The only comment I can legitimately make is that I did not realize until I found and read this article that a "published" piece with no usable information, many mistakes in grammar, and no references can be passed off as a scholarly work.<br />
<br />
==Related Resources==<br />
<br />
[[Using natural language processing to identify problem usage of prescription opioids]]<br />
<br />
[[Visualizing unstructured patient data for assessing diagnostic and therapeutic history]]<br />
<br />
==References==<br />
<references/><br />
<br />
<br />
[[Category:Reviews]]<br />
[[Category:HI5313-2015-FALL]]<br />
[[Category:Natural language processing (NLP)]]</div>TheoBibliohttp://www.clinfowiki.org/wiki/index.php/Natural_language_processing_and_inference_rules_as_strategies_for_updating_problem_list_in_an_electronic_health_recordNatural language processing and inference rules as strategies for updating problem list in an electronic health record2015-11-11T23:50:54Z<p>TheoBiblio: /* Comments */</p>
<hr />
<div>This is a brief review of the article "Natural language processing and inference rules as strategies for updating problem list in an electronic health record."<ref name="Plazzotta2013">Plazzotta F, Otero C, Luna D, De quiros FG. Natural language processing and inference rules as strategies for updating problem list in an electronic health record. Stud Health Technol Inform. 2013;192:1163. http://ebooks.iospress.nl/publication/34379</ref><br />
==Abstract<ref name="Plazzotta2013">Plazzotta F, Otero C, Luna D, De quiros FG. Natural language processing and inference rules as strategies for updating problem list in an electronic health record. Stud Health Technol Inform. 2013;192:1163. http://ebooks.iospress.nl/publication/34379</ref>==<br />
===Background===<br />
The problem-oriented electronic health record has become one of the most developed clinical documentation systems in medical informatics. While the advantages of a problem list are known and have been published in numerous studies, physicians do not always keep the problem list accurate, complete and updated.<br />
<br />
===Objective===<br />
To analyze natural language processing (NLP) techniques and inference rules as strategies to maintain completeness and accuracy of the problem list in EHRs.<br />
<br />
===Materials and Methods===<br />
Non systematic literature review in PubMed, in the last 10 years. Strategies to maintain the EHRs problem list were analyzed in two ways: inputting and removing problems from the problem list.<br />
<br />
===Results===<br />
NLP and inference rules have acceptable performance for inputting problems into the problem list. No studies using these techniques for removing problems were published Conclusion: Both tools, NLP and inference rules have had acceptable results as tools for maintain the completeness and accuracy of the problem list.<br />
<br />
===Conclusion===<br />
Natural language processing and inference rules have had acceptable results as tools for incorporating health problems into a problem list, mainly using limited sets of data. Further studies are needed to validate these rules in other areas and to extend the tools to a more comprehensively.<br />
<br />
==Comments==<br />
The only comment I can legitimately make is that I did not realize until I found and read this article that a "published" piece with no usable information, many mistakes in grammar, and no references can be passed off as a scholarly work.<br />
<br />
==Related Resources==<br />
<br />
[[Using natural language processing to identify problem usage of prescription opioids]]<br />
<br />
[[Visualizing unstructured patient data for assessing diagnostic and therapeutic history]]<br />
<br />
==References==<br />
<references/><br />
<br />
<br />
[[Category:Reviews]]<br />
[[Category:HI5313-2015-FALL]]<br />
[[Category:Natural language processing (NLP)]]</div>TheoBibliohttp://www.clinfowiki.org/wiki/index.php/Natural_language_processing_and_inference_rules_as_strategies_for_updating_problem_list_in_an_electronic_health_recordNatural language processing and inference rules as strategies for updating problem list in an electronic health record2015-11-11T23:40:15Z<p>TheoBiblio: </p>
<hr />
<div>This is a brief review of the article "Natural language processing and inference rules as strategies for updating problem list in an electronic health record."<ref name="Plazzotta2013">Plazzotta F, Otero C, Luna D, De quiros FG. Natural language processing and inference rules as strategies for updating problem list in an electronic health record. Stud Health Technol Inform. 2013;192:1163. http://ebooks.iospress.nl/publication/34379</ref><br />
==Abstract<ref name="Plazzotta2013">Plazzotta F, Otero C, Luna D, De quiros FG. Natural language processing and inference rules as strategies for updating problem list in an electronic health record. Stud Health Technol Inform. 2013;192:1163. http://ebooks.iospress.nl/publication/34379</ref>==<br />
===Background===<br />
The problem-oriented electronic health record has become one of the most developed clinical documentation systems in medical informatics. While the advantages of a problem list are known and have been published in numerous studies, physicians do not always keep the problem list accurate, complete and updated.<br />
<br />
===Objective===<br />
To analyze natural language processing (NLP) techniques and inference rules as strategies to maintain completeness and accuracy of the problem list in EHRs.<br />
<br />
===Materials and Methods===<br />
Non systematic literature review in PubMed, in the last 10 years. Strategies to maintain the EHRs problem list were analyzed in two ways: inputting and removing problems from the problem list.<br />
<br />
===Results===<br />
NLP and inference rules have acceptable performance for inputting problems into the problem list. No studies using these techniques for removing problems were published Conclusion: Both tools, NLP and inference rules have had acceptable results as tools for maintain the completeness and accuracy of the problem list.<br />
<br />
===Conclusion===<br />
Natural language processing and inference rules have had acceptable results as tools for incorporating health problems into a problem list, mainly using limited sets of data. Further studies are needed to validate these rules in other areas and to extend the tools to a more comprehensively.<br />
<br />
==Comments==<br />
IN PROGRESS.<br />
<br />
==Related Resources==<br />
<br />
[[Using natural language processing to identify problem usage of prescription opioids]]<br />
<br />
[[Visualizing unstructured patient data for assessing diagnostic and therapeutic history]]<br />
<br />
==References==<br />
<references/><br />
<br />
<br />
[[Category:Reviews]]<br />
[[Category:HI5313-2015-FALL]]<br />
[[Category:Natural language processing (NLP)]]</div>TheoBibliohttp://www.clinfowiki.org/wiki/index.php/Natural_language_processing_and_inference_rules_as_strategies_for_updating_problem_list_in_an_electronic_health_recordNatural language processing and inference rules as strategies for updating problem list in an electronic health record2015-11-11T23:38:14Z<p>TheoBiblio: </p>
<hr />
<div>IN PROGRESS. <ref name="Plazzotta2013">Plazzotta F, Otero C, Luna D, De quiros FG. Natural language processing and inference rules as strategies for updating problem list in an electronic health record. Stud Health Technol Inform. 2013;192:1163. http://ebooks.iospress.nl/publication/34379</ref><br />
==Abstract<ref name="Plazzotta2013">Plazzotta F, Otero C, Luna D, De quiros FG. Natural language processing and inference rules as strategies for updating problem list in an electronic health record. Stud Health Technol Inform. 2013;192:1163. http://ebooks.iospress.nl/publication/34379</ref>==<br />
===Background===<br />
The problem-oriented electronic health record has become one of the most developed clinical documentation systems in medical informatics. While the advantages of a problem list are known and have been published in numerous studies, physicians do not always keep the problem list accurate, complete and updated.<br />
<br />
===Objective===<br />
To analyze natural language processing (NLP) techniques and inference rules as strategies to maintain completeness and accuracy of the problem list in EHRs.<br />
<br />
===Materials and Methods===<br />
Non-systematic literature review of PubMed covering the last 10 years. Strategies to maintain the EHR problem list were analyzed in two directions: inputting problems into, and removing problems from, the problem list.<br />
<br />
===Results===<br />
NLP and inference rules have acceptable performance for inputting problems into the problem list. No studies using these techniques for removing problems have been published. Both tools have had acceptable results for maintaining the completeness and accuracy of the problem list.<br />
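As an illustration of the inference-rule approach the review describes, here is a minimal sketch of a rule that proposes a problem-list entry from structured lab data. The rules, thresholds, and field names are hypothetical examples, not taken from the reviewed paper.

```python
# Minimal sketch of inference rules that propose problem-list entries from
# structured data. Rules, thresholds, and field names are illustrative only;
# they are not the reviewed paper's rules.

def propose_problems(labs, problem_list):
    """Return problems suggested by simple inference rules but not yet listed."""
    rules = [
        # (rule name, predicate over the labs dict, problem to propose)
        ("hba1c_diabetes", lambda l: l.get("hba1c", 0) >= 6.5, "Diabetes mellitus"),
        ("egfr_ckd", lambda l: 0 < l.get("egfr", 90) < 60, "Chronic kidney disease"),
    ]
    existing = {p.lower() for p in problem_list}
    return [problem for _, pred, problem in rules
            if pred(labs) and problem.lower() not in existing]

suggested = propose_problems({"hba1c": 7.2, "egfr": 75}, ["Hypertension"])
# suggested -> ["Diabetes mellitus"]
```

A production system would route such suggestions to a clinician for confirmation rather than updating the problem list automatically.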
<br />
===Conclusion===<br />
Natural language processing and inference rules have had acceptable results as tools for incorporating health problems into a problem list, mainly using limited data sets. Further studies are needed to validate these rules in other areas and to extend the tools to cover data more comprehensively.<br />
<br />
==Comments==<br />
IN PROGRESS.<br />
<br />
==Related Resources==<br />
<br />
[[Using natural language processing to identify problem usage of prescription opioids]]<br />
<br />
[[Visualizing unstructured patient data for assessing diagnostic and therapeutic history]]<br />
<br />
==References==<br />
<references/><br />
<br />
<br />
[[Category:Reviews]]<br />
[[Category:HI5313-2015-FALL]]<br />
[[Category:Natural language processing (NLP)]]</div>TheoBibliohttp://www.clinfowiki.org/wiki/index.php/The_Symbiosis_of_Project_Management_And_Change_Management_During_Healthcare_Integrated_PlanningThe Symbiosis of Project Management And Change Management During Healthcare Integrated Planning2015-11-09T02:17:02Z<p>TheoBiblio: </p>
<hr />
<div>== First Review ==<br />
<br />
The following is a review of the article by Gordon and Hornstein, ''The Symbiosis of Project Management And Change Management During Healthcare Integrated Planning''.<ref name="PM Change"><br />
Gordon, A. J., & Hornstein, H. (2014). The Symbiosis of Project Management And Change Management During Healthcare Integrated Planning. http://www.pmi.org/learning/project-change-management-healthcare-planning-1923.</ref><br />
<br />
===Summary===<br />
The article was based on a study conducted in Ontario to determine best methods for cohesive healthcare planning. The [[Change management|change management]] area was highly emphasized. The study revealed that change management should be included in the initial stages of project planning as well as during the integration stages of healthcare project development.<br />
<br />
===Methodology===<br />
<br />
A study was conducted that included interviews with 10 healthcare project managers across all regions of Canada (provincial, regional, local). The following research questions were used for the study:<br />
*What are project management success factors and best practices during healthcare integrated changes?<br />
*How might project teams consider using approaches from organizational change theory in project management?<br />
*To what extent have project management teams integrated change management with project management techniques?<br />
<br />
In addition, approximately 33 articles focusing on project and change management processes were reviewed. <br />
<br />
===Results===<br />
The results were captured according to the three research questions noted and classified into themes: <br />
<br />
*Question 1: Theme: Project success is enabled through competency.<br />
*Question 2: Theme: Clear articulation of project benefits.<br />
*Question 3: Theme: Change management is integrated early in the project life cycle. <br />
<br />
===Conclusion / Summary of Key Points=== <br />
The article concluded with a summary of key points:<br />
*Project managers reported that processes are divided between project and change management;<br />
*Challenges exist in engaging change management within projects;<br />
*Project documentation was not based on change management theories;<br />
*The study highlighted the need for risk management planning;<br />
*Project benefits need to be communicated to both internal and external stakeholders;<br />
*Change projects can be steered towards uncertainty.<br />
<br />
===Comments===<br />
The article focused on reviews from experienced project managers in the health industry. The authors identified gaps and suggestions for best practices that are consistent with healthcare projects in the US. As many changes in health systems are implemented, understanding and utilizing change management best practices will be key to successful rollouts. <br />
<br />
===Related Articles===<br />
*[[EHR_implementation:_one_organization's_road_to_success]]<br />
<br />
== Second Review ==<br />
Add next review here.<br />
<br />
==References==<br />
<references/><br />
<br />
[[Category: Reviews]]<br />
[[Category: HI5313-2015-FALL]]<br />
[[Category: EHR implementation project]]<br />
[[Category: Methodologies and Frameworks]]</div>TheoBibliohttp://www.clinfowiki.org/wiki/index.php/Understanding_keys_to_successful_implementation_of_electronic_decision_support_in_rural_hospitals:_analysis_of_a_pilot_study_for_antimicrobial_prescribingUnderstanding keys to successful implementation of electronic decision support in rural hospitals: analysis of a pilot study for antimicrobial prescribing2015-11-09T02:10:27Z<p>TheoBiblio: </p>
<hr />
<div><br />
<br />
== Background ==<br />
[[CDS|Clinical Decision Support]] (CDS) provides clinicians the opportunity to have [[evidence-based medicine]] information at the right moment to enhance medical decision making. Despite its potential to reduce medical errors, improve clinical outcomes, and increase healthcare quality, CDSS are still not widely used by clinicians. Factors such as complexity, lack of adequate training and support, and increased cost are consistently cited by clinicians as barriers preventing CDSS implementation.<br />
<br />
== Introduction ==<br />
Antimicrobial agents constitute a major portion of hospital pharmacy expenditures, accounting for 20% to 50% of the total budget. Rural hospitals are at a particular disadvantage regarding CDSS implementation because of factors such as insufficient resources, limited clinical information systems, and limited access to infectious disease physicians who can provide advice or assistance. Internet-based decision support tools therefore offer clinicians an option for providing adequate antimicrobial prescribing advice to individuals in rural communities who lack access to more complex decision support systems.<ref name="Stevenson et al">Understanding Keys to Successful Implementation of Electronic Decision Support in Rural Hospitals: Analysis of a Pilot Study for Antimicrobial Prescribing http://ca3cx5qj7w.search.serialssolutions.com/OpenURL_local?sid=Entrez:PubMed&id=pmid:16280394</ref><br />
<br />
== Study design ==<br />
Pretest/Post-test<br />
<br />
== Methods ==<br />
A therapeutic clinical decision support system for the management of infectious diseases called "Antibiotic Assistant" was used during this study. Antibiotic Assistant provides patient-specific antimicrobial recommendations based on factors such as co-morbid conditions, demographic characteristics, vital signs, and results of microbiology studies. Five rural hospitals from southwest Idaho were selected; selection was based on factors such as involvement in a local rural health network as well as geographic proximity to the research team. Participants accessed an internet-based platform during the study; this platform was developed and supported by the Centers for Medicare & Medicaid Services. Each prescribing clinician at the participating hospitals was asked to enter patients' data into the Internet-based decision support tool and to implement its recommendations when making therapeutic and dosing decisions. An antimicrobial management team (AMT) consisting of a nurse, a pharmacist, and infection control staff was formed in each hospital to prevent under-utilization of the decision support tool and to ensure that a clinician was aware of the CDSS recommendations during the first 24 hours of a patient's hospital admission.<br />
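The study does not describe Antibiotic Assistant's internal logic. Purely as a hypothetical sketch of how patient-specific recommendations of this kind can be keyed to patient factors (drug names, thresholds, and field names are invented):

```python
# Hypothetical sketch of patient-specific antimicrobial recommendation logic.
# Drug names, rules, and field names are illustrative only; the reviewed
# study does not publish Antibiotic Assistant's internal rules.

def recommend(patient):
    """Pick a fictitious empiric regimen adjusted for patient-specific factors."""
    rec = {"drug": "drug_A", "dose_adjustment": None, "flags": []}
    if "penicillin" in patient.get("allergies", []):
        rec["drug"] = "drug_B"               # substitute to avoid the allergy class
        rec["flags"].append("allergy substitution")
    if patient.get("creatinine_clearance", 100) < 30:
        rec["dose_adjustment"] = "renal"     # reduce dose for renal impairment
    if patient.get("culture_result"):
        rec["flags"].append("narrow to culture: " + patient["culture_result"])
    return rec

r = recommend({"allergies": ["penicillin"], "creatinine_clearance": 25})
# r["drug"] == "drug_B"; r["dose_adjustment"] == "renal"
```

The point of the sketch is that each recommendation is derived from the individual patient's record, which is why timely data entry (the AMT's role above) matters.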
<br />
== Results ==<br />
First, physicians were reluctant to use the internet-based decision support tool because of the perceived length of time required for log-in and overall system run time. It was also reported that computers were not consistently located in patient care areas. Despite the formation of the antimicrobial management teams to obtain CDSS recommendations in a timely fashion and relay them to clinicians, this transfer of information failed to occur in 3 of the participating hospitals. In another participating hospital, clinicians decided not to follow the CDSS recommendations most of the time; this was attributed to the mechanism of information communication. In the first hospital (A), clinicians were notified of recommendations in 32% of the cases in which the prescribed drugs differed from those recommended, and antimicrobial orders were modified in 50% of those cases. In contrast, physicians in the second hospital (B) were notified of the recommendations by the AMT in 71% of the cases but followed the recommendations in only 24%. The local hospital environment and community discouraged non-physician AMT members from questioning clinicians' decisions. <br />
<br />
== Conclusion ==<br />
Cultural factors represented a barrier to the implementation of electronic decision support tools. Although cultural factors have only recently emerged as possible barriers to the uptake of electronic decision support tools, this is not the first study to suggest that we may be overlooking cultural factors when developing and implementing such systems. <br />
<br />
== References ==<br />
<references/> <br />
<br />
[[Category: Reviews]]<br />
[[Category: Technologies]]<br />
[[Category: CDSS]]<br />
[[Category: HI5313-2015-FALL]]</div>TheoBibliohttp://www.clinfowiki.org/wiki/index.php/Coronary_artery_disease_risk_assessment_from_unstructured_electronic_health_records_using_text_miningCoronary artery disease risk assessment from unstructured electronic health records using text mining2015-11-09T02:03:27Z<p>TheoBiblio: </p>
<hr />
<div>==First Review==<br />
This is a review of the article by Jonnagaddala et al., ''Coronary artery disease risk assessment from unstructured electronic health records using text mining''.<br />
<br />
===Abstract=== <br />
Coronary artery disease (CAD) often leads to myocardial infarction, which may be fatal. Risk factors can be used to predict CAD, which may subsequently lead to prevention or early intervention. Patient data such as co-morbidities, medication history, social history and family history are required to determine the risk factors for a disease. However, risk factor data are usually embedded in unstructured clinical narratives if the data is not collected specifically for risk assessment purposes. Clinical text mining can be used to extract data related to risk factors from unstructured clinical notes. This study presents methods to extract Framingham risk factors from unstructured electronic health records using clinical text mining and to calculate 10-year coronary artery disease risk scores in a cohort of diabetic patients. We developed a rule-based system to extract risk factors: age, gender, total cholesterol, HDL-C, blood pressure, diabetes history and smoking history. The results showed that the output from the text mining system was reliable, but there was a significant amount of missing data to calculate the Framingham risk score. A systematic approach for understanding missing data was followed by implementation of imputation strategies. An analysis of the 10-year Framingham risk scores for coronary artery disease in this cohort has shown that the majority of the diabetic patients are at moderate risk of CAD<ref name="main_article">Jonnagaddala, J., Liaw, S.-T., Ray, P., Kumar, M., Chang, N.-W., & Dai, H.-J. (2015). Coronary artery disease risk assessment from unstructured electronic health records using text mining. Journal of Biomedical Informatics. http://doi.org/10.1016/j.jbi.2015.08.003</ref>.<br />
<br />
===Background and Purpose===<br />
The paper focuses on using text mining to extract the information necessary for calculating a patient’s risk for coronary artery disease (CAD) from unstructured notes in electronic health records. There are different models used to predict a patient’s risk for CAD, but the most popular is the Framingham risk score (FRS) model. The factors the FRS uses are age, gender, total cholesterol or low-density lipoprotein cholesterol (LDL-C), high-density lipoprotein cholesterol (HDL-C), blood pressure (BP), diabetes history, and smoking history, all of which can be found in the unstructured text of an electronic health record<ref name="main_article"></ref>.<br />
Available online FRS calculators require manual, structured entry of the values for these factors. For a relatively low number of patients this isn’t a problem, but for a high number of patients it becomes an overwhelming task. Since all of the necessary information is available in the unstructured text of an electronic health record, it makes sense to employ a system that uses text mining to extract the values and calculate a patient’s CAD risk using the FRS model<ref name="main_article"></ref>.<br />
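As a rough illustration of the kind of rule-based extraction the study describes, the sketch below pulls a few risk-factor values out of free text with regular expressions. The patterns and the sample note are invented for illustration; the authors' actual rules are not reproduced here.

```python
import re

# Toy rule-based extraction of Framingham-style risk factors from free text.
# The patterns and the sample note are invented for illustration; they are
# not the rules used by the reviewed study.

PATTERNS = {
    "total_cholesterol": re.compile(r"total cholesterol[^0-9]*(\d+)", re.I),
    "hdl_c": re.compile(r"HDL[-\s]?C?[^0-9]*(\d+)", re.I),
    "smoker": re.compile(r"\b(current smoker|smokes)\b", re.I),
}

def extract_factors(note):
    """Return the risk factors found in `note`; numeric values as ints."""
    found = {}
    for name, pat in PATTERNS.items():
        m = pat.search(note)
        if m:
            value = m.group(1)
            found[name] = int(value) if value.isdigit() else True
    return found

note = "Pt is a current smoker. Total cholesterol 210 mg/dL, HDL-C 38."
factors = extract_factors(note)
# factors -> {"total_cholesterol": 210, "hdl_c": 38, "smoker": True}
```

Real clinical text mining must also handle negation, units, and section context, which is where much of the difficulty reported in such studies lies.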
<br />
===Results===<br />
The system they developed was rule-based, and the factors it extracted were used to calculate patients' 10-year CAD risk under the FRS model. The CAD risk scores generated by the system were consistent with the scores calculated manually for 20 patients using the official 10-year CAD FRS worksheet<ref name="main_article"></ref>. A major issue they encountered was missing information, but they were able to mitigate it by applying appropriate imputation methods, such as assigning the cohort mean value for factors without a value<ref name="main_article"></ref>.<br />
<br />
Two reasons were identified for the missing values: (a) the text mining system failed to recognize the risk factor data, and (b) the risk factor data was not recorded<ref name="main_article"></ref>. After manual inspection, the latter proved to be the reason for the majority of the missing values<ref name="main_article"></ref>.<br />
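The cohort-mean imputation strategy mentioned above can be sketched generically as follows (a minimal illustration of the idea, not the authors' exact procedure):

```python
# Cohort-mean imputation for a missing numeric risk factor -- a generic
# sketch of the strategy described, not the authors' exact procedure.

def impute_cohort_mean(records, field):
    """Fill missing (None) values of `field` with the mean over observed values."""
    observed = [r[field] for r in records if r.get(field) is not None]
    if not observed:
        return records  # nothing to impute from
    mean = sum(observed) / len(observed)
    for r in records:
        if r.get(field) is None:
            r[field] = mean
    return records

cohort = [{"hdl_c": 40}, {"hdl_c": 50}, {"hdl_c": None}]
impute_cohort_mean(cohort, "hdl_c")
# cohort[2]["hdl_c"] -> 45.0
```

Mean imputation is simple but can bias risk estimates when data are not missing at random, which is one reason the authors' systematic analysis of *why* values were missing matters.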
<br />
== Second Review ==<br />
<br />
Add next review here.<br />
<br />
== References ==<br />
<references/><br />
<br />
[[Category: Reviews]]<br />
[[Category: EHR]]<br />
[[Category: HI5313-2015-FALL]]</div>TheoBibliohttp://www.clinfowiki.org/wiki/index.php/Electronic_health_records_improve_the_quality_of_care_in_underserved_populations:_A_literature_reviewElectronic health records improve the quality of care in underserved populations: A literature review2015-11-01T07:42:31Z<p>TheoBiblio: /* Background and Purpose */</p>
<hr />
<div>== First Review ==<br />
<br />
This is a review of the article by Weinfeld et al., ''Electronic health records improve the quality of care in underserved populations: A literature review''. <br />
<br />
=== Abstract ===<br />
<br />
“Organizations in underserved settings are implementing or upgrading electronic health records (EHRs) in hopes of improving quality and meeting Federal goals for meaningful use of EHRs. However, much of the research that has been conducted on health information technology does not study use in underserved settings, or does not include EHRs. We conducted a structured literature search of [https://www.nlm.nih.gov/pubs/factsheets/medline.html MEDLINE] to find articles supporting the contention that EHRs improve quality in underserved settings. We found 17 articles published between 2003 and 2011. These articles were mostly in urban settings, and most study types were descriptive in nature. The articles provide evidence that EHRs can improve documentation, process measures, guideline-adherence, and (to a lesser extent) outcome measures. Providers and managers believed that EHRs would improve the quality and efficiency of care. The limited quantity and quality of evidence point to a need for ongoing research in this area” (p. 136).<ref name="Weinfeld 2012"> Weinfeld, J. M., Davidson, L. W., & Mohan, V. (2012). Electronic health records improve the quality of care in underserved populations: A literature review. Journal of Health Care for the Poor and Underserved, 23(3), 136-153. doi:10.1353/hpu.2012.0134. http://muse.jhu.edu/login?auth=0&type=summary&url=/journals/journal_of_health_care_for_the_poor_and_underserved/v023/23.3A.weinfeld.html</ref><br />
<br />
=== Background and Purpose ===<br />
<br />
The push for widespread adoption of EHR technology in the United States includes public health safety-net settings such as community health clinics and local public health departments. In addition to the often-cited rationale for EHR implementation (improved safety and quality of care delivery), reduction in disparities of care is among the goals of '''[[health information technology]]''' (HIT) in the public health arena. The authors’ intention is to inform and enhance current and future efforts at EHR adoption in low-resource settings that serve vulnerable populations.<br />
<br />
=== Discussion ===<br />
<br />
There is a lack of research investigating how HIT impacts care in community ambulatory settings that target underserved populations. Such research is needed because it is not known whether these technologies translate to, or have similar impacts on, quality of care in these settings. Consider the clinical guidelines and reminders that are part of EHR clinical decision support: these may not be effective in a low-resource setting such as a community clinic if that clinic lacks the resources to provide the intervention the EHR recommends. <br />
<br />
Seventeen articles were reviewed. Quality improvement gains were realized through the use of various EHR features: templates, documentation, problem lists, and reminders. The general perception of clinical users was that quality of care in underserved settings was improved by the adoption of EHR technology. Several of the studies found substantial improvements in care outcome measures for safety-net practices using an EHR compared with those using paper methods. These improvements held for both Medicare patients and uninsured patients. Another article found large increases in the number of preventive procedures (diabetic foot exams, lipid level tests, HbA1c tests) performed after EHR implementation compared with baseline measures before implementation. Completeness of documentation was improved with the availability of an EHR as compared with paper systems. In one study, increased documentation of asthma severity and tracking of ED visits and hospitalizations drastically improved the use of asthma medications in urban children at a health center that added decision support to an existing EHR. Other articles provided evidence that clinical decision support also served to reduce the number of medications prescribed per patient and to improve referral processes.<br />
<br />
=== Conclusion ===<br />
<br />
Despite findings showing the usefulness of EHR technology in underserved settings, the quality of the evidence is weak and further study is needed. Stakeholders involved in EHR implementation projects for underserved populations should focus on monitoring cost, productivity, and quality outcome measures to ensure that benefits are realized.<br />
<br />
=== Comments ===<br />
<br />
This article provides support for those seeking funding and approval for EHR adoption in settings that serve patients who face barriers to healthcare access. Given the high cost and extensive resources required to implement EHR technology, it is challenging to convince leadership that such HIT will have the same impact on quality of care delivery in these settings as is seen in private ambulatory settings.<br />
<br />
== Second Review ==<br />
<br />
Add next review here.<br />
<br />
== References ==<br />
<references/><br />
<br />
<br />
[[Category:Reviews]]<br />
[[Category:EHR]]<br />
[[Category:CIS]]<br />
[[Category:Public Health]]<br />
[[Category:HI5313-2015-FALL]]</div>TheoBibliohttp://www.clinfowiki.org/wiki/index.php/Using_natural_language_processing_to_identify_problem_usage_of_prescription_opioidsUsing natural language processing to identify problem usage of prescription opioids2015-11-01T03:32:42Z<p>TheoBiblio: /* Comments */</p>
<hr />
<div>The problem of prescription opioid abuse is well known and widespread, but clinicians often refer to patient opioid abuse in unstructured free-text notes rather than in structured, searchable discrete data fields. The article reviewed here studies the use of [[Natural language processing (NLP)]] to help identify patients whose problem might otherwise go unrecognized.<ref name="Carrell2015">Carrell DS, Cronkite D, Palmer RE, et al. Using natural language processing to identify problem usage of prescription opioids. Int J Med Inform. 2015;84(12):1057-64. http://www-ncbi-nlm-nih-gov.ezproxyhost.library.tmc.edu/pubmed/26456569</ref><br />
<br />
==Abstract<ref name="Carrell2015">Carrell DS, Cronkite D, Palmer RE, et al. Using natural language processing to identify problem usage of prescription opioids. Int J Med Inform. 2015;84(12):1057-64. http://www-ncbi-nlm-nih-gov.ezproxyhost.library.tmc.edu/pubmed/26456569</ref>==<br />
===Background===<br />
Accurate and scalable surveillance methods are critical to understand widespread problems associated with misuse and abuse of prescription opioids and for implementing effective prevention and control measures. Traditional diagnostic coding incompletely documents problem use. Relevant information for each patient is often obscured in vast amounts of clinical text.<br />
<br />
===Objective===<br />
The authors developed and evaluated a method that combines natural language processing (NLP) and computer-assisted manual review of clinical notes to identify evidence of problem opioid use in electronic health records (EHRs).<br />
<br />
===Methods===<br />
The authors used the EHR data and text of 22,142 patients receiving chronic opioid therapy (≥70 days' supply of opioids per calendar quarter) during 2006-2012 to develop and evaluate an NLP-based surveillance method and compare it to traditional methods based on International Classification of Diseases, Ninth Revision (ICD-9) codes. They developed a 1288-term dictionary of clinician mentions of opioid addiction, abuse, misuse, or overuse, and an NLP system to identify these mentions in unstructured text. The system distinguished affirmative mentions from those that were negated or otherwise qualified. The authors applied this system to 7,336,445 electronic chart notes of the 22,142 patients. Trained abstractors using a custom computer-assisted software interface manually reviewed 7751 chart notes (from 3156 patients) selected by the NLP system and classified each note as to whether or not it contained textual evidence of problem opioid use.<br />
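The dictionary-plus-negation approach described above can be sketched in a few lines. This toy version is not the authors' implementation: the terms, the negation cues, and the three-word lookback window are illustrative simplifications of how affirmative mentions can be separated from negated or qualified ones.<br />

```python
import re

# Illustrative mini-dictionary; the authors' actual lexicon had 1288 terms.
PROBLEM_TERMS = ["opioid addiction", "opioid abuse", "opioid misuse", "opioid overuse"]
# A few NegEx-style negation cues; a real system handles many more qualifiers.
NEGATION_CUES = {"no", "not", "denies", "without"}

def classify_mentions(note):
    """Return sorted (term, status) pairs for each dictionary hit, where
    status is 'affirmed' or 'negated' based on the three preceding words."""
    text = note.lower()
    hits = []
    for term in PROBLEM_TERMS:
        for m in re.finditer(re.escape(term), text):
            preceding = re.findall(r"[a-z]+", text[:m.start()])[-3:]
            status = "negated" if NEGATION_CUES & set(preceding) else "affirmed"
            hits.append((term, status))
    return sorted(hits)

print(classify_mentions("Patient denies opioid misuse but has a history of opioid addiction."))
```

Production systems such as NegEx use much richer cue lists and scope rules than a fixed lookback window, which is why the study still routed NLP-flagged notes to trained human abstractors.<br />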
<br />
===Results===<br />
Traditional diagnostic codes for problem opioid use were found for 2240 (10.1%) patients. NLP-assisted manual review identified an additional 728 (3.1%) patients with evidence of clinically diagnosed problem opioid use in clinical notes. Inter-rater reliability among pairs of abstractors reviewing notes was high, with kappa=0.86 and 97% agreement for one pair, and kappa=0.71 and 88% agreement for another pair.<br />
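Cohen's kappa, used above to report inter-rater reliability, corrects raw percent agreement for the agreement two raters would reach by chance. A minimal sketch of the statistic (the example labels are invented, not the study's data):<br />

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters labeling the same items."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items both raters labeled identically.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement: sum over categories of the product of marginal rates.
    categories = set(labels_a) | set(labels_b)
    expected = sum(
        (labels_a.count(c) / n) * (labels_b.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

# Toy example: raters agree on 3 of 4 notes.
print(cohens_kappa([1, 1, 0, 0], [1, 1, 0, 1]))
```

This is why a pair can show 88% raw agreement but a lower kappa of 0.71: when one label dominates, much of the raw agreement is expected by chance.<br />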
<br />
===Conclusion===<br />
Scalable, semi-automated NLP methods can efficiently and accurately identify evidence of problem opioid use in vast amounts of EHR text. Incorporating such methods into surveillance efforts may increase prevalence estimates by as much as one-third relative to traditional methods.<br />
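The "one-third" estimate follows directly from the counts reported in the Results section above:<br />

```python
# Counts from the Results section of the reviewed article.
icd9_cases = 2240      # patients flagged by traditional ICD-9 codes
nlp_additional = 728   # additional patients found only by NLP-assisted review

relative_increase = nlp_additional / icd9_cases
print(f"{relative_increase:.1%}")  # prints 32.5%, i.e. roughly one-third
```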
<br />
==Comments==<br />
As the authors note, using [[Natural language processing (NLP)]] alone to identify a problem like opioid abuse is neither reasonable nor desirable. With the increasing transparency of electronic health record ([[EHR]]) documentation and the transmission of these records across a wide network of health information exchanges ([[HIE]]s), errors resulting from purely software-based identification of these and other problems pose too high a risk to appropriate patient care. But a hybrid of NLP-assisted manual validation can significantly reduce the time required and the errors of omission of manual validation alone. The hybrid approach described in this article holds considerable promise for a wide range of difficult-to-detect patient health problems.<br />
<br />
==Related Articles==<br />
[[Evaluating healthcare quality using natural language processing]]<br />
<br />
[[Multi-label classification of chronically ill patients with bag of words and supervised dimensionality reduction algorithms]]<br />
<br />
[[Use of a support vector machine for categorizing free-text notes]]<br />
<br />
[[Visualizing unstructured patient data for assessing diagnostic and therapeutic history]]<br />
<br />
==References==<br />
<references/><br />
<br />
[[Category:Reviews]]<br />
[[Category:HI5313-2015-FALL]]<br />
[[Category:EHR]]<br />
[[Category:Natural language processing (NLP)]]</div>TheoBibliohttp://www.clinfowiki.org/wiki/index.php/Using_natural_language_processing_to_identify_problem_usage_of_prescription_opioidsUsing natural language processing to identify problem usage of prescription opioids2015-11-01T03:10:57Z<p>TheoBiblio: /* Related Articles */</p>
<hr />
<div>The problem of prescription opiod abuse is well-known and wide-spread, but clinicians often make references to patient opiod abuse in unstructured free-text notes rather than structured, searchable discrete data fields. The article being reviewed here studies the use of [[Natural language processing (NLP)]] to help identify the patients whose problem might otherwise go unrecognized. <ref name="Carrell2015">Carrell DS, Cronkite D, Palmer RE, et al. Using natural language processing to identify problem usage of prescription opioids. Int J Med Inform. 2015;84(12):1057-64. http://www-ncbi-nlm-nih-gov.ezproxyhost.library.tmc.edu/pubmed/26456569</ref><br />
<br />
==Abstract<ref name="Carrell2015">Carrell DS, Cronkite D, Palmer RE, et al. Using natural language processing to identify problem usage of prescription opioids. Int J Med Inform. 2015;84(12):1057-64. http://www-ncbi-nlm-nih-gov.ezproxyhost.library.tmc.edu/pubmed/26456569</ref>==<br />
===Background===<br />
Accurate and scalable surveillance methods are critical to understand widespread problems associated with misuse and abuse of prescription opioids and for implementing effective prevention and control measures. Traditional diagnostic coding incompletely documents problem use. Relevant information for each patient is often obscured in vast amounts of clinical text.<br />
<br />
===Objective===<br />
The authors developed and evaluated a method that combines natural language processing (NLP) and computer-assisted manual review of clinical notes to identify evidence of problem opioid use in electronic health records (EHRs).<br />
<br />
===Methods===<br />
The authors used the EHR data and text of 22,142 patients receiving chronic opioid therapy (≥70 days' supply of opioids per calendar quarter) during 2006-2012 to develop and evaluate an NLP-based surveillance method and compare it to traditional methods based on International Classification of Disease, Ninth Edition (ICD-9) codes. They developed a 1288-term dictionary for clinician mentions of opioid addiction, abuse, misuse or overuse, and an NLP system to identify these mentions in unstructured text. The system distinguished affirmative mentions from those that were negated or otherwise qualified. The authors applied this system to 7336,445 electronic chart notes of the 22,142 patients. Trained abstractors using a custom computer-assisted software interface manually reviewed 7751 chart notes (from 3156 patients) selected by the NLP system and classified each note as to whether or not it contained textual evidence of problem opioid use.<br />
<br />
===Results===<br />
Traditional diagnostic codes for problem opioid use were found for 2240 (10.1%) patients. NLP-assisted manual review identified an additional 728 (3.1%) patients with evidence of clinically diagnosed problem opioid use in clinical notes. Inter-rater reliability among pairs of abstractors reviewing notes was high, with kappa=0.86 and 97% agreement for one pair, and kappa=0.71 and 88% agreement for another pair.<br />
<br />
===Conclusion===<br />
Scalable, semi-automated NLP methods can efficiently and accurately identify evidence of problem opioid use in vast amounts of EHR text. Incorporating such methods into surveillance efforts may increase prevalence estimates by as much as one-third relative to traditional methods.<br />
<br />
==Comments==<br />
IN PROCESS<br />
<br />
==Related Articles==<br />
[[Evaluating healthcare quality using natural language processing]]<br />
<br />
[[Multi-label classification of chronically ill patients with bag of words and supervised dimensionality reduction algorithms]]<br />
<br />
[[Use of a support vector machine for categorizing free-text notes]]<br />
<br />
[[Visualizing unstructured patient data for assessing diagnostic and therapeutic history]]<br />
<br />
==References==<br />
<references/><br />
<br />
[[Category:Reviews]]<br />
[[Category:HI5313-2015-FALL]]<br />
[[Category:EHR]]<br />
[[Category:Natural language processing (NLP)]]</div>TheoBibliohttp://www.clinfowiki.org/wiki/index.php/Using_natural_language_processing_to_identify_problem_usage_of_prescription_opioidsUsing natural language processing to identify problem usage of prescription opioids2015-11-01T03:10:42Z<p>TheoBiblio: /* Related Articles */</p>
<hr />
<div>The problem of prescription opiod abuse is well-known and wide-spread, but clinicians often make references to patient opiod abuse in unstructured free-text notes rather than structured, searchable discrete data fields. The article being reviewed here studies the use of [[Natural language processing (NLP)]] to help identify the patients whose problem might otherwise go unrecognized. <ref name="Carrell2015">Carrell DS, Cronkite D, Palmer RE, et al. Using natural language processing to identify problem usage of prescription opioids. Int J Med Inform. 2015;84(12):1057-64. http://www-ncbi-nlm-nih-gov.ezproxyhost.library.tmc.edu/pubmed/26456569</ref><br />
<br />
==Abstract<ref name="Carrell2015">Carrell DS, Cronkite D, Palmer RE, et al. Using natural language processing to identify problem usage of prescription opioids. Int J Med Inform. 2015;84(12):1057-64. http://www-ncbi-nlm-nih-gov.ezproxyhost.library.tmc.edu/pubmed/26456569</ref>==<br />
===Background===<br />
Accurate and scalable surveillance methods are critical to understand widespread problems associated with misuse and abuse of prescription opioids and for implementing effective prevention and control measures. Traditional diagnostic coding incompletely documents problem use. Relevant information for each patient is often obscured in vast amounts of clinical text.<br />
<br />
===Objective===<br />
The authors developed and evaluated a method that combines natural language processing (NLP) and computer-assisted manual review of clinical notes to identify evidence of problem opioid use in electronic health records (EHRs).<br />
<br />
===Methods===<br />
The authors used the EHR data and text of 22,142 patients receiving chronic opioid therapy (≥70 days' supply of opioids per calendar quarter) during 2006-2012 to develop and evaluate an NLP-based surveillance method and compare it to traditional methods based on International Classification of Disease, Ninth Edition (ICD-9) codes. They developed a 1288-term dictionary for clinician mentions of opioid addiction, abuse, misuse or overuse, and an NLP system to identify these mentions in unstructured text. The system distinguished affirmative mentions from those that were negated or otherwise qualified. The authors applied this system to 7336,445 electronic chart notes of the 22,142 patients. Trained abstractors using a custom computer-assisted software interface manually reviewed 7751 chart notes (from 3156 patients) selected by the NLP system and classified each note as to whether or not it contained textual evidence of problem opioid use.<br />
<br />
===Results===<br />
Traditional diagnostic codes for problem opioid use were found for 2240 (10.1%) patients. NLP-assisted manual review identified an additional 728 (3.1%) patients with evidence of clinically diagnosed problem opioid use in clinical notes. Inter-rater reliability among pairs of abstractors reviewing notes was high, with kappa=0.86 and 97% agreement for one pair, and kappa=0.71 and 88% agreement for another pair.<br />
<br />
===Conclusion===<br />
Scalable, semi-automated NLP methods can efficiently and accurately identify evidence of problem opioid use in vast amounts of EHR text. Incorporating such methods into surveillance efforts may increase prevalence estimates by as much as one-third relative to traditional methods.<br />
<br />
==Comments==<br />
IN PROCESS<br />
<br />
==Related Articles==<br />
[[Evaluating healthcare quality using natural language processing]] <br />
[[Multi-label classification of chronically ill patients with bag of words and supervised dimensionality reduction algorithms]]<br />
[[Use of a support vector machine for categorizing free-text notes]]<br />
[[Visualizing unstructured patient data for assessing diagnostic and therapeutic history]]<br />
<br />
==References==<br />
<references/><br />
<br />
[[Category:Reviews]]<br />
[[Category:HI5313-2015-FALL]]<br />
[[Category:EHR]]<br />
[[Category:Natural language processing (NLP)]]</div>TheoBibliohttp://www.clinfowiki.org/wiki/index.php/Using_natural_language_processing_to_identify_problem_usage_of_prescription_opioidsUsing natural language processing to identify problem usage of prescription opioids2015-11-01T03:09:44Z<p>TheoBiblio: </p>
<hr />
<div>The problem of prescription opiod abuse is well-known and wide-spread, but clinicians often make references to patient opiod abuse in unstructured free-text notes rather than structured, searchable discrete data fields. The article being reviewed here studies the use of [[Natural language processing (NLP)]] to help identify the patients whose problem might otherwise go unrecognized. <ref name="Carrell2015">Carrell DS, Cronkite D, Palmer RE, et al. Using natural language processing to identify problem usage of prescription opioids. Int J Med Inform. 2015;84(12):1057-64. http://www-ncbi-nlm-nih-gov.ezproxyhost.library.tmc.edu/pubmed/26456569</ref><br />
<br />
==Abstract<ref name="Carrell2015">Carrell DS, Cronkite D, Palmer RE, et al. Using natural language processing to identify problem usage of prescription opioids. Int J Med Inform. 2015;84(12):1057-64. http://www-ncbi-nlm-nih-gov.ezproxyhost.library.tmc.edu/pubmed/26456569</ref>==<br />
===Background===<br />
Accurate and scalable surveillance methods are critical to understand widespread problems associated with misuse and abuse of prescription opioids and for implementing effective prevention and control measures. Traditional diagnostic coding incompletely documents problem use. Relevant information for each patient is often obscured in vast amounts of clinical text.<br />
<br />
===Objective===<br />
The authors developed and evaluated a method that combines natural language processing (NLP) and computer-assisted manual review of clinical notes to identify evidence of problem opioid use in electronic health records (EHRs).<br />
<br />
===Methods===<br />
The authors used the EHR data and text of 22,142 patients receiving chronic opioid therapy (≥70 days' supply of opioids per calendar quarter) during 2006-2012 to develop and evaluate an NLP-based surveillance method and compare it to traditional methods based on International Classification of Disease, Ninth Edition (ICD-9) codes. They developed a 1288-term dictionary for clinician mentions of opioid addiction, abuse, misuse or overuse, and an NLP system to identify these mentions in unstructured text. The system distinguished affirmative mentions from those that were negated or otherwise qualified. The authors applied this system to 7336,445 electronic chart notes of the 22,142 patients. Trained abstractors using a custom computer-assisted software interface manually reviewed 7751 chart notes (from 3156 patients) selected by the NLP system and classified each note as to whether or not it contained textual evidence of problem opioid use.<br />
<br />
===Results===<br />
Traditional diagnostic codes for problem opioid use were found for 2240 (10.1%) patients. NLP-assisted manual review identified an additional 728 (3.1%) patients with evidence of clinically diagnosed problem opioid use in clinical notes. Inter-rater reliability among pairs of abstractors reviewing notes was high, with kappa=0.86 and 97% agreement for one pair, and kappa=0.71 and 88% agreement for another pair.<br />
<br />
===Conclusion===<br />
Scalable, semi-automated NLP methods can efficiently and accurately identify evidence of problem opioid use in vast amounts of EHR text. Incorporating such methods into surveillance efforts may increase prevalence estimates by as much as one-third relative to traditional methods.<br />
<br />
==Comments==<br />
IN PROCESS<br />
<br />
==Related Articles==<br />
[[Visualizing unstructured patient data for assessing diagnostic and therapeutic history]]<br />
<br />
==References==<br />
<references/><br />
<br />
[[Category:Reviews]]<br />
[[Category:HI5313-2015-FALL]]<br />
[[Category:EHR]]<br />
[[Category:Natural language processing (NLP)]]</div>TheoBibliohttp://www.clinfowiki.org/wiki/index.php/Using_natural_language_processing_to_identify_problem_usage_of_prescription_opioidsUsing natural language processing to identify problem usage of prescription opioids2015-11-01T02:57:16Z<p>TheoBiblio: </p>
<hr />
<div>This is a review of the article "Using natural language processing to identify problem usage of prescription opioids" by David S. Carrella, et al in ''International Journal of Medical Informatics''. <ref name="Carrell2015">Carrell DS, Cronkite D, Palmer RE, et al. Using natural language processing to identify problem usage of prescription opioids. Int J Med Inform. 2015;84(12):1057-64. http://www-ncbi-nlm-nih-gov.ezproxyhost.library.tmc.edu/pubmed/26456569</ref><br />
<br />
==Abstract<ref name="Carrell2015">Carrell DS, Cronkite D, Palmer RE, et al. Using natural language processing to identify problem usage of prescription opioids. Int J Med Inform. 2015;84(12):1057-64. http://www-ncbi-nlm-nih-gov.ezproxyhost.library.tmc.edu/pubmed/26456569</ref>==<br />
===Background===<br />
Accurate and scalable surveillance methods are critical to understand widespread problems associated with misuse and abuse of prescription opioids and for implementing effective prevention and control measures. Traditional diagnostic coding incompletely documents problem use. Relevant information for each patient is often obscured in vast amounts of clinical text.<br />
<br />
===Objective===<br />
The authors developed and evaluated a method that combines natural language processing (NLP) and computer-assisted manual review of clinical notes to identify evidence of problem opioid use in electronic health records (EHRs).<br />
<br />
===Methods===<br />
The authors used the EHR data and text of 22,142 patients receiving chronic opioid therapy (≥70 days' supply of opioids per calendar quarter) during 2006-2012 to develop and evaluate an NLP-based surveillance method and compare it to traditional methods based on International Classification of Disease, Ninth Edition (ICD-9) codes. They developed a 1288-term dictionary for clinician mentions of opioid addiction, abuse, misuse or overuse, and an NLP system to identify these mentions in unstructured text. The system distinguished affirmative mentions from those that were negated or otherwise qualified. The authors applied this system to 7336,445 electronic chart notes of the 22,142 patients. Trained abstractors using a custom computer-assisted software interface manually reviewed 7751 chart notes (from 3156 patients) selected by the NLP system and classified each note as to whether or not it contained textual evidence of problem opioid use.<br />
<br />
===Results===<br />
Traditional diagnostic codes for problem opioid use were found for 2240 (10.1%) patients. NLP-assisted manual review identified an additional 728 (3.1%) patients with evidence of clinically diagnosed problem opioid use in clinical notes. Inter-rater reliability among pairs of abstractors reviewing notes was high, with kappa=0.86 and 97% agreement for one pair, and kappa=0.71 and 88% agreement for another pair.<br />
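Cohen's kappa, the agreement statistic quoted above, corrects raw percent agreement for the agreement expected by chance from each rater's label frequencies. A generic sketch (toy labels, not the study's annotations):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance from the two raters' marginal label frequencies."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: sum over labels of the product of marginal rates
    expected = sum((freq_a[label] / n) * (freq_b[label] / n)
                   for label in set(rater_a) | set(rater_b))
    return (observed - expected) / (1 - expected)
```

With kappa=0.86 alongside 97% raw agreement, the first abstractor pair agreed far more often than their label frequencies alone would produce.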
<br />
===Conclusion===<br />
Scalable, semi-automated NLP methods can efficiently and accurately identify evidence of problem opioid use in vast amounts of EHR text. Incorporating such methods into surveillance efforts may increase prevalence estimates by as much as one-third relative to traditional methods.<br />
<br />
==Comments==<br />
IN PROCESS<br />
<br />
==Related Articles==<br />
[[Visualizing unstructured patient data for assessing diagnostic and therapeutic history]]<br />
<br />
==References==<br />
<references/><br />
<br />
[[Category:Reviews]]<br />
[[Category:HI5313-2015-FALL]]<br />
[[Category:EHR]]<br />
[[Category:Natural language processing (NLP)]]</div>TheoBibliohttp://www.clinfowiki.org/wiki/index.php/Use_of_a_support_vector_machine_for_categorizing_free-text_notesUse of a support vector machine for categorizing free-text notes2015-11-01T02:48:38Z<p>TheoBiblio: </p>
<hr />
<div>The use of [[Natural language processing (NLP)]] to extract discrete data from free-text documentation in an [[EMR|Electronic Health Record (EHR)]] is one of the most challenging and rewarding fields of study in the healthcare informatics community. In an article entitled "Use of a support vector machine for categorizing free-text notes: assessment of accuracy across two institutions" published in the Journal of the American Medical Informatics Association (JAMIA) Sep-Oct 2013, the authors describe the use of NLP to identify EHR Progress Notes which pertain to diabetes.<ref name="Wright2013">Wright A, McCoy AB, Henkin S, Kale A, Sittig DF. Use of a support vector machine for categorizing free-text notes: assessment of accuracy across two institutions. Journal of the American Medical Informatics Association : JAMIA. 2013;20(5):887-890. doi:10.1136/amiajnl-2012-001576.</ref><br />
<br />
==Abstract<ref name="Wright2013">Wright A, McCoy AB, Henkin S, Kale A, Sittig DF. Use of a support vector machine for categorizing free-text notes: assessment of accuracy across two institutions. Journal of the American Medical Informatics Association : JAMIA. 2013;20(5):887-890. doi:10.1136/amiajnl-2012-001576.</ref>==<br />
===Background===<br />
Electronic health record (EHR) users must regularly review large amounts of data in order to make informed clinical decisions, and such review is time-consuming and often overwhelming. Technologies like automated summarization tools, EHR search engines and natural language processing have been shown to help clinicians manage this information.<br />
<br />
===Objective===<br />
To develop a [https://en.wikipedia.org/wiki/Support_vector_machine support vector machine (SVM)]-based system for identifying EHR progress notes pertaining to diabetes, and to validate it at two institutions.<br />
<br />
===Materials and Methods===<br />
We retrieved 2000 EHR progress notes from patients with diabetes at the Brigham and Women's Hospital (1000 for training and 1000 for testing) and another 1000 notes from the University of Texas Physicians (for validation). We manually annotated all notes and trained an SVM using a bag-of-words approach. We then used the SVM on the testing and validation sets and evaluated its performance with the area under the curve (AUC) and F statistics.<br />
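The bag-of-words representation named above can be sketched as follows. This toy featurizer is not the authors' code; it simply produces the kind of term-count vectors on which an SVM classifier would then be trained (e.g. with scikit-learn's `CountVectorizer` and `SVC` in practice):

```python
import re
from collections import Counter

def bag_of_words(notes):
    """Turn free-text notes into term-count vectors over a shared,
    alphabetically ordered vocabulary."""
    tokenized = [re.findall(r"[a-z]+", note.lower()) for note in notes]
    vocab = sorted({tok for toks in tokenized for tok in toks})
    # One count vector per note, aligned to the shared vocabulary
    vectors = [[Counter(toks)[term] for term in vocab] for toks in tokenized]
    return vocab, vectors
```

The bag-of-words model discards word order entirely, which is part of why the classifier transferred so well across institutions: it depends only on which clinical terms appear, not on note style.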
<br />
===Results===<br />
The model accurately identified diabetes-related notes in both the Brigham and Women's Hospital testing set (AUC=0.956, F=0.934) and the external University of Texas Faculty Physicians validation set (AUC=0.947, F=0.935).<br />
<br />
===Discussion===<br />
Overall, the model we developed was quite accurate. Furthermore, it generalized, without loss of accuracy, to another institution with a different EHR and a distinct patient and provider population.<br />
<br />
===Conclusion===<br />
It is possible to use a SVM-based classifier to identify EHR progress notes pertaining to diabetes, and the model generalizes well.<br />
<br />
==Comments==<br />
My interest in this article stems from my ongoing work in an acute care hospital system to identify patients who fall into specific disease-process classifications in order to activate '''Clinical Decision Support ([[CDS]])''' tools that provide both evidence-based practice options to healthcare providers as well as capturing documentation required for regulatory reporting purposes.<br />
<br />
Using a fairly straightforward NLP program, the authors were able to identify Progress Notes pertaining to diabetes with significant accuracy, which they solidly demonstrated using a validation group. Their method involved keyword matches, which the authors noted as creating limitations related to Progress Notes which may have involved diabetic patients but did not contain keywords. <br />
<br />
This approach could provide some level of the desired identification of patients in the acute care setting. However, a much more comprehensive review of the entire medical record (labs, radiology reports, orders), one that looks not only for keywords but also for specific parameters (vital signs and labs outside of specified ranges, procedures performed within a certain amount of time, medications ordered), increases the complexity of the NLP required, especially as negation must also be taken into account.<br />
<br />
Of note, the high degree of success in this study associated with a relatively simple approach did give me hope that some complex NLP challenges may actually be much less difficult to overcome given the right viewpoint and approach.<br />
<br />
==Related Resources==<br />
<br />
Joachims, T. (1998). Text categorization with support vector machines: Learning with many relevant features (pp. 137-142). Springer Berlin Heidelberg.<br />
<br />
[[Using natural language processing to identify problem usage of prescription opioids]]<br />
<br />
[[Visualizing unstructured patient data for assessing diagnostic and therapeutic history]]<br />
<br />
==References==<br />
<references/><br />
<br />
<br />
[[Category:Reviews]]<br />
[[Category:HI5313-2015-FALL]]<br />
[[Category:Natural language processing (NLP)]]<br />
[[Category:New Technology]]<br />
[[Category:EHR]]</div>TheoBibliohttp://www.clinfowiki.org/wiki/index.php/Can_Utilizing_a_Computerized_Provider_Order_Entry_(CPOE)_System_Prevent_Hospital_Medical_Errors_and_Adverse_Drug_EventsCan Utilizing a Computerized Provider Order Entry (CPOE) System Prevent Hospital Medical Errors and Adverse Drug Events2015-11-01T01:07:14Z<p>TheoBiblio: </p>
<hr />
<div>==Introduction==<br />
<br />
The objective of the research was to examine the benefits of and barriers to computerized physician order entry ([[CPOE]]) adoption in hospitals, to determine the effects on medical errors and [[adverse drug event]]s (ADEs), as well as to examine the costs and savings associated with the implementation of CPOE. This study used a systematic review and referenced 50 sources.<br />
<br />
==Methods==<br />
The literature review and review of case studies were performed in January to May 2013 and September 2013 to March 2014. Electronic databases were searched for the terms “CPOE” OR “Computerized Physician Order Entry” OR “Electronic Prescribing” AND “Medical Errors” OR “adverse drug events [[ADE]]” OR “Adoption” OR “Implementation” AND “[[Meaningful use]]” OR “HITECH”, [http://www.ahrq.gov/ Agency for Healthcare Research and Quality], Health Affairs, and [[CMS]].<ref name="Charles">Charles, K., Cannon, M., Hall, R., & Coustasse, A. (Fall 2014). Can Utilizing a Computerized Provider Order Entry (CPOE) System Prevent Hospital Medical Errors and Adverse Drug Events? Perspectives in Health Information Management, 1-16</ref> After analysis, 154 references were found and 51 citations were used for the study. The results were structured in groups that described the benefits of and barriers to implementation and adoption of CPOE systems.<br />
<br />
==Results==<br />
Because preventable medical errors and ADEs have increased from 98,000 reported cases in 2000 to 210,000 cases in 2013, it is a patient safety imperative for healthcare providers to implement CPOE systems. A 2012 study estimated that utilizing a CPOE system could potentially reduce medical errors by as much as 48 percent. <br />
<ref>http://www.ihi.org/resources/Pages/Publications/ReductionMedicationErrorsinHospitalsAdoptionCPOE.aspx</ref> Other benefits identified in using CPOE included:<br />
* Increase in the accessibility of the patient’s medical records<br />
* The ability for a physician to work off-site and still have access to information regarding patient past visits<br />
* Reduction in prescription ordering by the physicians<br />
* Increase in coordination of care<br />
<br />
Some of the barriers associated with CPOE implementation include system interoperability, faulty programming, system crashes, and the main problem, cost.<ref name="Charles" /> A 2005 study showed implementation costs ranged from $1.3 - $2.1 million for critical access hospitals, approximately $2.0 million for rural referral hospitals, and $1.9 - $4.4 million for urban hospitals. Many small hospitals simply cannot afford an EHR system. Thirty percent of small hospitals (less than 100 beds) and 28 percent of rural hospitals have adopted CPOE, compared to 56 percent of large hospitals (more than 400 beds) and 53 percent of teaching hospitals with more than 20 residents.<br />
<br />
==Conclusion==<br />
CPOE implementation can help reduce medical errors and ADEs, create hospital cost savings, offer providers additional clinical knowledge, provide timely patient-specific information and offer a level of convenience for order entry.<br />
<br />
==Comments==<br />
The studies suggest CPOE can significantly reduce the frequency of medication errors in hospitals, but it is unclear whether this translates into reduced harm for patients.<br />
<br />
==References==<br />
<references /><br />
<br />
<br />
==Related Articles==<br />
[[Computerized Physician Order Entry-realted Medication Errors: Analysis of Reported Errors and Vulnerability Testing of Current Systems]]<br />
<br />
[[Factors contributing to an increase in duplicate medication order errors after CPOE implementation]]<br />
<br />
[[Category: Reviews]]<br />
[[Category: HI5313-2015-FALL]]<br />
[[Category: Adverse drug event]]<br />
[[Category: CPOE]]</div>TheoBibliohttp://www.clinfowiki.org/wiki/index.php/Development_and_Implementation_of_an_Electronic_Health_Record_Generated_Surgical_Handoff_and_Rounding_ToolDevelopment and Implementation of an Electronic Health Record Generated Surgical Handoff and Rounding Tool2015-11-01T00:59:55Z<p>TheoBiblio: /* Results */</p>
<hr />
<div>This is a review of the article ''Development and Implementation of an Electronic Health Record Generated Surgical Handoff and Rounding Tool'' from the ''Journal of Medical Systems''. <ref name="Raval">Raval, M., Rust, L., Thakkar, R., Kurtovic, K., Nwomeh, B., Besner, G., Kenney, B., Development and Implementation of an Electronic Health Record Generated Surgical Handoff and Rounding Tool. Journal of Medical Systems 39 (8), 2015. http://www.ncbi.nlm.nih.gov/pubmed/?term=Development+and+Implementation+of+an+Electronic+Health+Record+Generated+Surgical+Handoff+and+Rounding+Tool</ref><br />
<br />
==Background==<br />
It has been found that many hospital errors are a result of inadequate communication. A large pediatric hospital in Ohio conducted a study to determine if there was a change in safety, accuracy, and efficiency following a departmental change from using a Microsoft Access Database (MAD) for hand-off reports to a hand-off report automatically generated by the EPIC [[electronic health record|electronic health record]].<br />
<br />
==Methods==<br />
The researchers utilized a qualitative study method to survey clinicians regarding both types of handoff methods. They were asked to evaluate their perceived efficiency of the methods and the accuracy, and to rate the list compared to the methods used in different specialty areas. The researchers also used the hospital databases to determine any changes in adverse events following the change.<br />
<br />
==Results==<br />
The researchers found that 19% of the MAD records contained inaccuracies, while none of the EPIC records had any inaccuracies. The surveys also showed that clinicians reduced time managing the lists by 43 minutes per week; at the same time, the clinicians found the EPIC system to be generally safer, more efficient, and overall good to very good.<br />
<br />
==Conclusion==<br />
While the results are limited, the overall success of this system was apparent. With these results, the researchers concluded that perceived benefits to patient safety, and overall accuracy coupled with advancements in mobile technology will be the way of the future.<br />
<br />
== Article Analysis ==<br />
Communication is essential for providing quality care to all patients, and with increased technology and electronic health records, clinicians require new innovative methods to improve communication while still ensuring safety for all patients. By developing a method to “capture” all the pertinent data from an electronic health record and creating a quick, understandable reference to be used when transitioning a patient from one clinician to another, informaticians can facilitate a smooth, efficient, accurate, and safe transition. Many EHRs already contain the information that clinicians need; the key is to develop a way to have it readily accessible. This study clearly demonstrated that despite limitations and some negative aspects (more pages printed), patient safety and clinician evaluation can be improved by incorporating EHRs into handoffs.<br />
<br />
<br />
== References ==<br />
<br />
<References/><br />
<br />
[[Category: Reviews ]]<br />
[[Category: EHR ]]<br />
[[Category: CIS]]<br />
[[Category:HI5313-2015-FALL]]<br />
[[Category:Perioperative Informatics]]</div>TheoBibliohttp://www.clinfowiki.org/wiki/index.php/Visualizing_unstructured_patient_data_for_assessing_diagnostic_and_therapeutic_historyVisualizing unstructured patient data for assessing diagnostic and therapeutic history2015-10-25T05:28:28Z<p>TheoBiblio: /* Related Articles */</p>
<hr />
<div>Having access to relevant patient data is crucial for clinical decision making. The data is often documented in unstructured texts and collected in the electronic health record. In this paper, the authors evaluated an approach to visualize information extracted from clinical documents by means of a tag cloud. <ref name="Deng2014">Deng Y, Denecke K. Visualizing unstructured patient data for assessing diagnostic and therapeutic history. Stud Health Technol Inform. 2014;205:1158-62. http://www-ncbi-nlm-nih-gov.ezproxyhost.library.tmc.edu/pubmed/25160371.</ref><br />
<br />
==Abstract<ref name="Deng2014">Deng Y, Denecke K. Visualizing unstructured patient data for assessing diagnostic and therapeutic history. Stud Health Technol Inform. 2014;205:1158-62. http://www-ncbi-nlm-nih-gov.ezproxyhost.library.tmc.edu/pubmed/25160371.</ref>==<br />
===Background===<br />
Information is stored in an electronic health record in various data types, many of which are unstructured. As a result, clinicians may find it difficult to gain a holistic impression of a patient's current medical condition when data collected over the course of many years includes a significant amount of lab work, many diagnostic exams, numerous procedures, and even hospitalizations. Review of this data can require a significant amount of time if no substantive overview of all of the available data is present.<br />
<br />
===Objective===<br />
In this work, the authors address the question of representing the current status of a patient that is described in the form of unstructured text, to enable physicians to get an overview quickly. The authors introduce an approach that visualizes patient information extracted from the documents of the EHR in an easily understandable manner, namely by means of tag cloud visualization. The tag cloud is a common visualization method in the Web 2.0 community. Studies showed that tag clouds enhance the perception of (Web) documents [6] and that they support an explorative search when it is difficult to specify a concrete query. These benefits perfectly address the challenges of accessing and monitoring patient data documented in unstructured documents. In this context, using tag clouds for information visualization is still relatively unexplored.<br />
<br />
===Study Design and Method===<br />
For the study, tags are selected from a text either (1) purely by frequency (bag of words) or (2) based on their part of speech (POS). The first type of tags (bag of words) is generated from all the words of a document except stop words. All of these tags are rendered in the same color. <br />
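This first, frequency-only selection can be sketched in a few lines of Python. The stop-word list and the top-N cutoff below are illustrative assumptions, not values from the paper:

```python
import re
from collections import Counter

# Illustrative stop-word list; the paper does not specify which list was used.
STOP_WORDS = {"the", "a", "an", "and", "of", "in", "is", "was", "for", "with", "to"}

def bag_of_words_tags(text, top_n=30):
    """Select tag-cloud tags purely by term frequency, dropping stop words."""
    tokens = re.findall(r"[A-Za-z]+", text.lower())
    counts = Counter(t for t in tokens if t not in STOP_WORDS)
    # Each tag is (word, frequency); frequency would drive the rendered font size.
    return counts.most_common(top_n)

tags = bag_of_words_tags("MRI of the brain shows a lesion. The lesion is small.")
```

In a rendered cloud, the frequency of each returned tag would determine its font size, with all tags drawn in one color.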
<br />
To generate the second type of tags, the tokens of a text are annotated with their part-of-speech labels; the lexical category of the remaining words of a document is determined after removal of stop words. The authors highlight nouns, verbs, and adjectives with three primary colors (red, yellow, and blue) because of the decisive roles these lexical categories play in conveying meaning. The applicability of this scheme and the correctness of the authors' intuition about meaning representation still need to be analyzed in experiments.<br />
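A minimal sketch of this POS-based coloring, assuming a tagger has already produced coarse `(word, pos)` pairs (a real tagger such as `nltk.pos_tag` would emit Penn Treebank tags that first need mapping to these coarse categories; the stop-word set is again illustrative):

```python
# Color scheme from the paper: nouns red, verbs yellow, adjectives blue.
POS_COLOR = {"NOUN": "red", "VERB": "yellow", "ADJ": "blue"}

def color_tags(tagged_tokens, stop_words=frozenset({"the", "is", "a"})):
    """Keep nouns, verbs, and adjectives, attaching the color each tag is rendered in."""
    return [(word, POS_COLOR[pos])
            for word, pos in tagged_tokens
            if word.lower() not in stop_words and pos in POS_COLOR]

# Hypothetical tagger output for a short clinical phrase.
tagged = [("tumor", "NOUN"), ("is", "VERB"), ("large", "ADJ"), ("the", "DET")]
colored = color_tags(tagged)
```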
<br />
To assess the effectiveness of the tag clouds, a user study was performed in the neurosurgical department of a university hospital. Three residents were asked to assess the tag clouds generated for different texts and to judge whether (1) the tag clouds are useful for getting an overview of the patient's status, (2) the shown words are relevant, and (3) the visualization of relevant aspects is clear. Six medical reports in German (three surgical operation reports, two pathological reports, and one radiological report) were used to generate the tag clouds. <br />
<br />
Before starting to answer the questions, the physicians were asked to imagine the following simulated task scenario:<br />
<br />
In the outpatient department, you face a patient you have never seen before but who has already been treated in the hospital; you have the complete patient documentation in the computer. You are expected to grasp the basic status of this patient and start treatment within a very short period of time.<br />
<br />
Based on this scenario, all six medical reports, visualized by the two types of tag cloud (generated through the bag-of-words approach or the POS tagger), were judged on a rating scale from 1 (bad) to 5 (excellent).<br />
<br />
===Results===<br />
Since the medical narratives showed significant differences in format and writing style, the authors performed a comparison by data source, namely operation notes, pathological reports, and radiological reports. To obtain the arithmetic mean for one narrative type, the sum of the judgment means (over the three evaluators) is divided by the number of documents of that narrative type. <br />
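The per-narrative-type mean described above is simple arithmetic; a short sketch with made-up rating values (the paper's actual scores are not reproduced here):

```python
def narrative_type_mean(per_document_means):
    """Average the evaluators' mean judgments over all documents of one narrative type."""
    return sum(per_document_means) / len(per_document_means)

# Hypothetical example: three operation notes, each value already the mean
# of the three evaluators' 1-5 ratings for that document.
op_note_mean = narrative_type_mean([3.0, 4.0, 3.5])
```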
<br />
The pathological reports achieved the best scores on the first question (usefulness), while the operation notes reached the highest mean value on the second question, concerning tag relevance. For question three, the visualization of relevant aspects, the tag clouds also provided a moderate representation of details to the physicians, even though only entity extraction approaches were applied to generate the clouds, without considering the semantic meanings of terms. <br />
<br />
Generally, the evaluators stated that they obtained an overview of the reports through the tag clouds, but they wished to see more details on the medical condition of a patient. <br />
<br />
For all three questions, the variance and deviations in the physicians' ratings were analyzed. The main reason for the differences is their varying work experience and knowledge backgrounds. <br />
<br />
===Conclusion===<br />
In conclusion, even simple methods can support the perception of relevant aspects reported in clinical documents. Since the approach relies only on tokenization, part-of-speech tagging, and stop-word removal, texts in different languages can easily be processed with the same method. <br />
<br />
Several improvements are possible and would increase user satisfaction. In the future, the authors plan to study whether tags generated using concept extraction are a better option for identifying the most important aspects of a clinical document. Symptoms, diseases, and anatomical concepts would be the most interesting information in medical records, and concepts belonging to those categories could be used for tag cloud generation. Moreover, the semantic relations between the concepts could also be presented to users, and through clustering of tags, potential topics could be detected and visualized.<br />
<br />
==Comments==<br />
Working in the acute care setting, I definitely see the application of the concepts associated with this article. The difficulty of adequately summarizing large amounts of clinically relevant yet unstructured data continues to frustrate and impede smooth provider workflow and timely clinical decision making. As new and increasing sources of healthcare information make their way into the electronic health record, it is imperative that studies such as this one find ways to teach the computer to better fit into natural provider workflow rather than interrupting provider workflow in order to serve the EHR.<br />
<br />
==Related Articles==<br />
[[Optimization of drug–drug interaction alert rules in a pediatric hospital's electronic health record system using a visual analytics dashboard]]<br />
<br />
[[Multi-label classification of chronically ill patients with bag of words and supervised dimensionality reduction algorithms]]<br />
<br />
==References==<br />
<references/><br />
<br />
[[Category:Reviews]]<br />
[[Category:HI5313-2015-FALL]]<br />
[[Category:CDS]]<br />
[[Category:EHR]]<br />
[[Category:Natural language processing (NLP)]]</div>
<hr />
<div>As patients live longer, physicians face the difficulty of managing increasing numbers of chronic illnesses over long periods of time. Classifying patients affected by multiple illnesses by means of [[natural language processing (NLP)]] can enhance decision support for medical doctors. Two challenges must be overcome to define a system capable of correctly classifying the multiple illnesses that may affect a chronically ill patient: (a) dealing with irregular multivariate time series; and (b) dealing with the interaction of multiple co-morbidities in a heterogeneous population of patients. In this article, the authors describe a method for performing multi-label classification on the health records of chronically ill patients. <ref name="Bromuri2014">Bromuri S, Zufferey D, Hennebert J, Schumacher M. Multi-label classification of chronically ill patients with bag of words and supervised dimensionality reduction algorithms. J Biomed Inform. 2014;51:165-75. http://www-ncbi-nlm-nih-gov.ezproxyhost.library.tmc.edu/pubmed/24879897</ref><br />
<br />
==Abstract<ref name="Bromuri2014">Bromuri S, Zufferey D, Hennebert J, Schumacher M. Multi-label classification of chronically ill patients with bag of words and supervised dimensionality reduction algorithms. J Biomed Inform. 2014;51:165-75. http://www-ncbi-nlm-nih-gov.ezproxyhost.library.tmc.edu/pubmed/24879897</ref>==<br />
===Background===<br />
Multi-label classification, kernels, locality preserving projections and multi-class Fisher discriminant analysis are combined in a system for classification of multi-label chronically ill patients.<br />
<br />
===Objective===<br />
This research is motivated by the issue of classifying illnesses of chronically ill patients for decision support in clinical settings. The authors' main objective is to propose multi-label classification of multivariate time series contained in medical records of chronically ill patients, by means of quantization methods, such as bag of words (BoW), and multi-label classification algorithms. The second objective is to compare supervised dimensionality reduction techniques to state-of-the-art multi-label classification algorithms. The hypothesis is that kernel methods and locality preserving projections make such algorithms good candidates to study multi-label medical time series.<br />
<br />
===Study Design and Method===<br />
The authors combine BoW and supervised dimensionality reduction algorithms to perform multi-label classification on the health records of chronically ill patients. The considered algorithms are compared with state-of-the-art multi-label classifiers on two real-world datasets. The Portavita dataset contains 525 type 2 diabetes (DT2) patients with co-morbidities of DT2 such as hypertension, dyslipidemia, and microvascular or macrovascular issues. The MIMIC II dataset contains 2635 patients affected by thyroid disease, diabetes mellitus, lipoid metabolism disease, fluid electrolyte disease, hypertensive disease, thrombosis, hypotension, chronic obstructive pulmonary disease (COPD), liver disease, and kidney disease. The algorithms are evaluated using multi-label evaluation metrics such as Hamming loss, one-error, coverage, ranking loss, and average precision.<br />
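Of the evaluation metrics listed, Hamming loss is the simplest to illustrate: it is the fraction of label slots (over all patients and all candidate diagnoses) that a classifier predicts incorrectly. A minimal sketch, with a made-up two-patient example rather than the paper's data:

```python
def hamming_loss(y_true, y_pred):
    """Fraction of label slots predicted incorrectly, over all samples and labels."""
    total = sum(len(row) for row in y_true)
    wrong = sum(t != p
                for row_t, row_p in zip(y_true, y_pred)
                for t, p in zip(row_t, row_p))
    return wrong / total

# Hypothetical example: two patients, three candidate diagnoses each
# (1 = label present). One of the six label slots is wrong.
loss = hamming_loss([[1, 0, 1], [0, 1, 0]], [[1, 1, 1], [0, 1, 0]])
```

Lower values are better; a perfect multi-label classifier has a Hamming loss of 0.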
<br />
===Results===<br />
Non-linear dimensionality reduction approaches behave well on medical time series quantized using the BoW algorithm, with results comparable to state-of-the-art multi-label classification algorithms. Chaining the projected features has a positive impact on the performance of the algorithm with respect to pure binary relevance approaches.<br />
<br />
===Conclusion===<br />
The evaluation highlights the feasibility of representing medical health records using the BoW for multi-label classification tasks. The study also highlights that dimensionality reduction algorithms based on kernel methods, locality preserving projections or both are good candidates to deal with multi-label classification tasks in medical time series with many missing values and high label density.<br />
<br />
==Comments==<br />
The use of [[natural language processing (NLP)]] to aid physicians in classifying patient conditions and expediting clinical decision support ([[CDS]]) tools is an extremely important enterprise. The plethora of laboratory tests and diagnostic exams provides increasing opportunities to identify both the presence and the severity of specific illnesses, but the ability of physicians to recognize and treat those conditions is limited by their ability to sift this information out of the mountain of data present in a patient's electronic health record ([[EHR]]). With methods such as the one described in this article, the potential for meaningful classifications to arise from these data for use in clinical decision support provides an exciting opportunity for physicians to move to a more intuitive approach to diagnosing and treating chronic conditions.<br />
<br />
==Related Articles==<br />
[[Visualizing unstructured patient data for assessing diagnostic and therapeutic history]]<br />
<br />
==References==<br />
<references/><br />
<br />
[[Category:Reviews]]<br />
[[Category:HI5313-2015-FALL]]<br />
[[Category:CDS]]<br />
[[Category:EHR]]<br />
[[Category:Natural language processing (NLP)]]</div>
<hr />
<div>With the increase in longevity of patients comes the difficulty for physicians of managing increasing numbers of chronic illnesses over a long period of time. Classifying patients affected by multiple illnesses by means of [[natural language processing (NLP)]] can enhance the decision support of medical doctors. There are two challenges to overcome in order to define a system capable of correctly classifying the multiple illnesses that may affect a chronically ill patient: (a) dealing with irregular multivariate time series; and (b) dealing with the interaction of multiple co-morbidities in a heterogeneous population of patients. In this article, the authors describe a method by which multi-label classification on health records of chronically ill patients may be performed. <ref name="Bromuri2014">Bromuri S, Zufferey D, Hennebert J, Schumacher M. Multi-label classification of chronically ill patients with bag of words and supervised dimensionality reduction algorithms. J Biomed Inform. 2014;51:165-75. http://www-ncbi-nlm-nih-gov.ezproxyhost.library.tmc.edu/pubmed/24879897</ref><br />
<br />
==Abstract<ref name="Bromuri2014">Bromuri S, Zufferey D, Hennebert J, Schumacher M. Multi-label classification of chronically ill patients with bag of words and supervised dimensionality reduction algorithms. J Biomed Inform. 2014;51:165-75. http://www-ncbi-nlm-nih-gov.ezproxyhost.library.tmc.edu/pubmed/24879897</ref>==<br />
===Background===<br />
Multi-label classification, kernels, locality preserving projections and multi-class Fisher discriminant analysis are combined in a system for classification of multi-label chronically ill patients.<br />
<br />
===Objective===<br />
This research is motivated by the issue of classifying illnesses of chronically ill patients for decision support in clinical settings. The authors' main objective is to propose multi-label classification of multivariate time series contained in medical records of chronically ill patients, by means of quantization methods, such as bag of words (BoW), and multi-label classification algorithms. The second objective is to compare supervised dimensionality reduction techniques to state-of-the-art multi-label classification algorithms. The hypothesis is that kernel methods and locality preserving projections make such algorithms good candidates to study multi-label medical time series.<br />
<br />
===Study Design and Method===<br />
We combine BoW and supervised dimensionality reduction algorithms to perform multi-label classification on health records of chronically ill patients. The considered algorithms are compared with state-of-the-art multi-label classifiers in two real world datasets. Portavita dataset contains 525 diabetes type 2 (DT2) patients, with co-morbidities of DT2 such as hypertension, dyslipidemia, and microvascular or macrovascular issues. MIMIC II dataset contains 2635 patients affected by thyroid disease, diabetes mellitus, lipoid metabolism disease, fluid electrolyte disease, hypertensive disease, thrombosis, hypotension, chronic obstructive pulmonary disease (COPD), liver disease and kidney disease. The algorithms are evaluated using multi-label evaluation metrics such as hamming loss, one error, coverage, ranking loss, and average precision.<br />
<br />
===Results===<br />
Non-linear dimensionality reduction approaches behave well on medical time series quantized using the BoW algorithm, with results comparable to state-of-the-art multi-label classification algorithms. Chaining the projected features has a positive impact on the performance of the algorithm with respect to pure binary relevance approaches.<br />
<br />
===Conclusion===<br />
The evaluation highlights the feasibility of representing medical health records using the BoW for multi-label classification tasks. The study also highlights that dimensionality reduction algorithms based on kernel methods, locality preserving projections or both are good candidates to deal with multi-label classification tasks in medical time series with many missing values and high label density.<br />
<br />
==Comments==<br />
<br />
<br />
==Related Articles==<br />
[[Visualizing unstructured patient data for assessing diagnostic and therapeutic history]]<br />
<br />
==References==<br />
<references/><br />
<br />
[[Category:Reviews]]<br />
[[Category:HI5313-2015-FALL]]<br />
[[Category:CDS]]<br />
[[Category:EHR]]<br />
[[Category:Natural language processing (NLP)]]</div>TheoBibliohttp://www.clinfowiki.org/wiki/index.php/Multi-label_classification_of_chronically_ill_patients_with_bag_of_words_and_supervised_dimensionality_reduction_algorithmsMulti-label classification of chronically ill patients with bag of words and supervised dimensionality reduction algorithms2015-10-25T04:24:31Z<p>TheoBiblio: /* Objective */</p>
<hr />
<div>IN PROCESS. <ref name="Bromuri2014">Bromuri S, Zufferey D, Hennebert J, Schumacher M. Multi-label classification of chronically ill patients with bag of words and supervised dimensionality reduction algorithms. J Biomed Inform. 2014;51:165-75. http://www-ncbi-nlm-nih-gov.ezproxyhost.library.tmc.edu/pubmed/24879897</ref><br />
<br />
==Abstract<ref name="Bromuri2014">Bromuri S, Zufferey D, Hennebert J, Schumacher M. Multi-label classification of chronically ill patients with bag of words and supervised dimensionality reduction algorithms. J Biomed Inform. 2014;51:165-75. http://www-ncbi-nlm-nih-gov.ezproxyhost.library.tmc.edu/pubmed/24879897</ref>==<br />
===Background===<br />
Multi-label classification, kernels, locality preserving projections and multi-class Fisher discriminant analysis are combined in a system for classification of multi-label chronically ill patients.<br />
<br />
===Objective===<br />
This research is motivated by the issue of classifying illnesses of chronically ill patients for decision support in clinical settings. The authors' main objective is to propose multi-label classification of multivariate time series contained in medical records of chronically ill patients, by means of quantization methods, such as bag of words (BoW), and multi-label classification algorithms. The second objective is to compare supervised dimensionality reduction techniques to state-of-the-art multi-label classification algorithms. The hypothesis is that kernel methods and locality preserving projections make such algorithms good candidates to study multi-label medical time series.<br />
<br />
===Study Design and Method===<br />
We combine BoW and supervised dimensionality reduction algorithms to perform multi-label classification on health records of chronically ill patients. The considered algorithms are compared with state-of-the-art multi-label classifiers in two real world datasets. Portavita dataset contains 525 diabetes type 2 (DT2) patients, with co-morbidities of DT2 such as hypertension, dyslipidemia, and microvascular or macrovascular issues. MIMIC II dataset contains 2635 patients affected by thyroid disease, diabetes mellitus, lipoid metabolism disease, fluid electrolyte disease, hypertensive disease, thrombosis, hypotension, chronic obstructive pulmonary disease (COPD), liver disease and kidney disease. The algorithms are evaluated using multi-label evaluation metrics such as hamming loss, one error, coverage, ranking loss, and average precision.<br />
<br />
===Results===<br />
Non-linear dimensionality reduction approaches behave well on medical time series quantized using the BoW algorithm, with results comparable to state-of-the-art multi-label classification algorithms. Chaining the projected features has a positive impact on the performance of the algorithm with respect to pure binary relevance approaches.<br />
<br />
===Conclusion===<br />
The evaluation highlights the feasibility of representing medical health records using the BoW for multi-label classification tasks. The study also highlights that dimensionality reduction algorithms based on kernel methods, locality preserving projections or both are good candidates to deal with multi-label classification tasks in medical time series with many missing values and high label density.<br />
<br />
==Comments==<br />
<br />
<br />
==Related Articles==<br />
[[Visualizing unstructured patient data for assessing diagnostic and therapeutic history]]<br />
<br />
==References==<br />
<references/><br />
<br />
[[Category:Reviews]]<br />
[[Category:HI5313-2015-FALL]]<br />
[[Category:CDS]]<br />
[[Category:EHR]]<br />
[[Category:Natural language processing (NLP)]]</div>TheoBibliohttp://www.clinfowiki.org/wiki/index.php/Multi-label_classification_of_chronically_ill_patients_with_bag_of_words_and_supervised_dimensionality_reduction_algorithmsMulti-label classification of chronically ill patients with bag of words and supervised dimensionality reduction algorithms2015-10-25T04:23:48Z<p>TheoBiblio: /* Background */</p>
<hr />
<div>IN PROCESS. <ref name="Bromuri2014">Bromuri S, Zufferey D, Hennebert J, Schumacher M. Multi-label classification of chronically ill patients with bag of words and supervised dimensionality reduction algorithms. J Biomed Inform. 2014;51:165-75. http://www-ncbi-nlm-nih-gov.ezproxyhost.library.tmc.edu/pubmed/24879897</ref><br />
<br />
==Abstract<ref name="Bromuri2014">Bromuri S, Zufferey D, Hennebert J, Schumacher M. Multi-label classification of chronically ill patients with bag of words and supervised dimensionality reduction algorithms. J Biomed Inform. 2014;51:165-75. http://www-ncbi-nlm-nih-gov.ezproxyhost.library.tmc.edu/pubmed/24879897</ref>==<br />
===Background===<br />
Multi-label classification, kernels, locality preserving projections and multi-class Fisher discriminant analysis are combined in a system for classification of multi-label chronically ill patients.<br />
<br />
===Objective===<br />
This research is motivated by the issue of classifying illnesses of chronically ill patients for decision support in clinical settings. Our main objective is to propose multi-label classification of multivariate time series contained in medical records of chronically ill patients, by means of quantization methods, such as bag of words (BoW), and multi-label classification algorithms. Our second objective is to compare supervised dimensionality reduction techniques to state-of-the-art multi-label classification algorithms. The hypothesis is that kernel methods and locality preserving projections make such algorithms good candidates to study multi-label medical time series.<br />
<br />
===Study Design and Method===<br />
We combine BoW and supervised dimensionality reduction algorithms to perform multi-label classification on health records of chronically ill patients. The considered algorithms are compared with state-of-the-art multi-label classifiers in two real world datasets. Portavita dataset contains 525 diabetes type 2 (DT2) patients, with co-morbidities of DT2 such as hypertension, dyslipidemia, and microvascular or macrovascular issues. MIMIC II dataset contains 2635 patients affected by thyroid disease, diabetes mellitus, lipoid metabolism disease, fluid electrolyte disease, hypertensive disease, thrombosis, hypotension, chronic obstructive pulmonary disease (COPD), liver disease and kidney disease. The algorithms are evaluated using multi-label evaluation metrics such as hamming loss, one error, coverage, ranking loss, and average precision.<br />
<br />
===Results===<br />
Non-linear dimensionality reduction approaches behave well on medical time series quantized using the BoW algorithm, with results comparable to state-of-the-art multi-label classification algorithms. Chaining the projected features has a positive impact on the performance of the algorithm with respect to pure binary relevance approaches.<br />
<br />
===Conclusion===<br />
The evaluation highlights the feasibility of representing medical health records using the BoW for multi-label classification tasks. The study also highlights that dimensionality reduction algorithms based on kernel methods, locality preserving projections or both are good candidates to deal with multi-label classification tasks in medical time series with many missing values and high label density.<br />
<br />
==Comments==<br />
<br />
<br />
==Related Articles==<br />
[[Visualizing unstructured patient data for assessing diagnostic and therapeutic history]]<br />
<br />
==References==<br />
<references/><br />
<br />
[[Category:Reviews]]<br />
[[Category:HI5313-2015-FALL]]<br />
[[Category:CDS]]<br />
[[Category:EHR]]<br />
[[Category:Natural language processing (NLP)]]</div>TheoBibliohttp://www.clinfowiki.org/wiki/index.php/Computerised_provider_order_entry_combined_with_clinical_decision_support_systems_to_improve_medication_safetyComputerised provider order entry combined with clinical decision support systems to improve medication safety2015-10-25T02:59:05Z<p>TheoBiblio: </p>
<hr />
<div>This is a review of the article "Computerised provider order entry combined with clinical decision support systems to improve medication safety" by Ranji et al.<br />
<br />
==Background==<br />
[[Adverse drug event|Adverse drug events (ADEs)]] are a major cause of morbidity and mortality in hospitalized and ambulatory patients <ref>Sumant R Ranji, Stephanie Rennke, Robert M Wachter.''Computerised provider order entry combined with clinical decision support systems to improve medication safety''.BMJ Qual Saf 2014;23:9 773-780 Published Online First: 12 April 2014 doi:10.1136/bmjqs-2013-002165[http://qualitysafety.bmj.com.ezproxyhost.library.tmc.edu/content/23/9/773.long]</ref>. It is expected that computerised provider order entry (CPOE) combined with clinical decision support systems (CDSS) will reduce this morbidity and mortality in clinical settings. <br />
<br />
==Methods==<br />
The authors conducted a systematic review of literature on the combination of CPOE and CDSS in clinical settings, using the Agency for Healthcare Research and Quality (AHRQ). The included articles measured costs, unintended consequences, or implementation factors specific to CPOE and CDSS, and were gathered from both randomized and non-randomized controlled studies.<br />
<br />
==Results==<br />
The review included:<br />
* 5 systematic reviews<br />
* 1 narrative review<br />
* 2 controlled trials<br />
These studies showed that CPOE and CDSS combined may in fact reduce ADEs. Of the 10 studies, 9 were conducted in the inpatient setting, and 5 showed a strong positive influence on the reduction of ADEs.<br />
<br />
==Conclusion==<br />
In conclusion, the data suggest that the proper use of CPOE and CDSS may reduce ADEs. However, this analysis, like many others, indicates that the topic still needs further study.<br />
<br />
==Comments==<br />
Through the proper use of technology we can reduce ADEs and ultimately reduce morbidity and mortality in clinical settings. However, I agree that this topic should be explored further: additional controlled clinical studies, rather than reviews alone, would provide more data and help confirm the general consensus.<br />
<br />
==References==<br />
<references/><br />
[[category:Reviews]]<br />
[[category:HI5313-2015-FALL]]<br />
[[category:CPOE]]<br />
[[category:CDSS]]</div>TheoBiblio