|
Post by bridgetfrpas on Aug 25, 2016 6:44:07 GMT -4
The NeuroLex Diagnostics speech analyzer seems like an important piece in moving mental health treatment forward. The idea that a primary care physician (PCP) could use this analyzer with all of their patients suggests the potential to detect changes earlier. PCPs have the unique advantage of seeing patients over the long term, so the speech analyzer could track changes over time and potentially catch a schizophrenia diagnosis early, which would be incredibly beneficial. Earlier diagnosis and medical treatment could result in more stable, independent patients. I, along with the author of the paper, have doubts about the accuracy of the speech analyzer, especially when it comes to other cultures and genders. The author interviews a doctor who says that unless the analyzer is really accurate, it shouldn't be used, and I agree with this statement. But, as humans, we are still able to detect the tone changes, word changes, and phrase lengths ourselves, so couldn't we just use the device as an add-on? I'm not sure we should ever fully rely on AI technology to diagnose, because we, as humans, have "gut feelings" and an intuition that can never be replicated. I think in time this speech analyzer could benefit the medical field, but for now it seems like it has to go through more serious trials before being put on the market.
BF PA-S
|
|
|
Post by Katarzyna Goryl on Aug 25, 2016 10:24:42 GMT -4
Wow, like, for real. I guess it's a good tool to confirm a diagnosis for a physician, but if a robot is doing all the work, what's the point of going to see a doctor anymore? Just buy the robot and do it yourself. With all the hackers and scam artists out there, I'm pretty sure you could find a robot for half off on the web if you looked hard enough. This system just takes away the MD from a doctor and replaces it with IT, which is totally not cool. The amount of schooling an MD does compared to an IT individual is incomparable. I believe machines are just the lazy way to medicine; yes, they have their pros, being cleaner and more efficient, but we are trained to have a sympathetic and caring approach. Plus, what happens when the machine loses its marbles and starts misdiagnosing people? Some people out there still trust doctors more than computers, so from a patient's perspective the machine might not make it. Today's society is too focused on technology; how about spending millions of dollars on other, more pressing areas like Zika and AIDS? Machines that confirm a diagnosis should be kept to hobbies or kitchen tools, but kept out of medicine. I'm not saying all machines, though, because the CT and MRI are very important, but they help more than they harm, and they're not programmable, they're mostly mechanical. So I think it's a no-go with diagnosing patients; it's not fair to the patient if a machine diagnoses you psychologically.
|
|
|
Post by Barbara Goryl (MS3) on Aug 25, 2016 10:59:13 GMT -4
The world of technology is exciting in that it brings new advancements into medicine and psychiatry. This article is a perfect depiction of where medicine is heading in this century. It's wonderful that a piece of technology could aid in proper diagnosis. Many patients are misdiagnosed on a daily basis. Just like in the article, it took 10 years for his brother to receive a proper diagnosis and treatment plan. That is too long. If we can prevent such events from happening with artificial intelligence, then why not try it? However, I do see a problem in that there are many people with different backgrounds and speech patterns. That is a major concern, because healthy people could suddenly be diagnosed with mental disabilities. Until the machine is equipped to decipher other languages and speech patterns, it should not be used as a definitive means of diagnosing people.
|
|
|
Post by elshaddaitesfaye on Aug 25, 2016 11:35:02 GMT -4
As the world has grown and transformed with technology, it only makes sense that the same would happen in the field of medicine. This article discusses how artificial intelligence can be used to detect verbal cues that point toward a diagnosis of schizophrenia, bipolar disorder, or depression. As exciting as the possibility of using this technology to screen for these disorders may seem, many concerns come to mind. It could be a concern for people who naturally respond in short, abrupt sentences. Also, with artificial intelligence there is room for error, as many people speak in different manners, whether it be the tone of their voice, their dialect, or even the fact that they may not be English speakers. Another concern is privacy and voice recording without the consent of the patient. If the patient is made aware of the software being used, it could alter the way they would naturally speak or behave. Although artificial intelligence may be able to quantify findings for a more objective assessment, I do believe that psychiatry will always be a field where human-to-human interaction is key.
|
|
|
Post by Parthener Pinder (MS3) on Aug 25, 2016 11:56:24 GMT -4
The field of medicine is constantly changing, and technology is becoming a huge part of it. As explained in the article, NeuroLex Diagnostics, a technology company that uses specific linguistic markers to screen for disease risk factors, has turned its focus toward schizophrenia. This company is determined to build an efficient and effective tool to assist primary care physicians in screening for schizophrenia. Early detection of schizophrenia and early intervention are critical, especially since schizophrenia can be considered a degenerative mental illness in which every psychotic break can possibly cause more brain damage and lead the individual to deteriorate markedly over time. I think that by combining the knowledge of a primary care physician with the technology from NeuroLex, over time not only physicians but also individuals battling mental illness on a daily basis will benefit. The sooner an individual is screened and diagnosed with schizophrenia, the better the outcome for that individual.
|
|
|
Post by Emily Keys MS3 on Aug 25, 2016 12:00:07 GMT -4
Advancements in technology are proving to be very useful in the vast field of medicine. As with most technological innovations, there comes skepticism. The doubt surrounding the use of technology to diagnose patients can be diminished through extensive research and numerous studies. I believe this is the case for using Artificial Intelligence to correctly identify mental health illnesses. The article points out that AI may incorrectly interpret the different speech patterns of various cultures. Adding to this, what if a patient is totally nonverbal and only uses gestures to communicate? AI would be useless in this scenario. I believe another challenge for using AI is the fact that a mental health illness can present slightly differently from one patient to the next. While the idea of using AI seems fascinating, there is still something about human interaction that prevails.
EK MS3
|
|
|
Post by Aaron Boren MS4 on Aug 25, 2016 12:44:12 GMT -4
Machines have increased our ability to diagnose and treat many diseases and improve quality of life. I think researching artificial intelligence as a means of further advancing those abilities, as well as filling the void of the provider shortage and decreasing human error, is a great endeavor. This would be an incredible asset to the mental health field if the obstacles presented in the article can be overcome and it goes on to prove a reliable form of assessment. I have to admit I am a bit skeptical, because applying the algorithm across a wide variety of cultures, as stated in the article, seems to be an extremely complicated issue to overcome. With that being said, I still have hope that the technology we are capable of developing will contribute to the field of mental health as it does in other fields of medicine.
|
|
|
Post by arifhussainmd3 on Aug 25, 2016 14:09:52 GMT -4
This was a very interesting article. Getting computers to diagnose patients, though it might be very tempting, is a risky venture. The article noted that differences in demographics, culture, and subpopulations within a region could skew diagnoses, to great detriment. I think it would be a great tool to work alongside the clinician to confirm the diagnosis he or she makes, but not the other way around. When dealing with human subjects (patients), it is critical to have something that is foolproof, or close to it. This is the very reason automated self-driving cars haven't made it to market: there is a very narrow margin for error. That is why I don't think this is a viable option, especially in psychiatry, which relies on observation, demeanor, and evaluation of the patient by the psychiatrist. There is so much more to aiding the diagnosis than just the phrases and tones of the language. Though it is a great idea in other disciplines of medicine, such as internal medicine, cardiology, and pulmonology, to use automation in laboratory testing to help make diagnoses, it is not going to be a great help in psychiatry.
|
|
|
Post by Aniruddha Gollapalli MS3 on Aug 25, 2016 16:01:16 GMT -4
As with all other fields, the advent of computers in medicine is expected and even welcomed in most instances. As mentioned in the article, artificial intelligence could facilitate psychiatric treatment in numerous ways. However, there are possible shortcomings with this technology. Firstly, even though it's an age-old argument, the truth is that artificial intelligence will always lack the human capacity to understand subtlety. It is often said that psychiatry is as much an art as it is a science. The psychiatrist needs to be able to make judgment calls when diagnosing patients. The doctor needs to be able to read the patient's expressions and body language as well as their speech patterns. Secondly, dependence on technology can diminish our own abilities. For example, since cash registers and calculators were introduced, most individuals have very poor mental math skills. It would be detrimental to the profession if psychiatrists went down the same road. Finally, I don't believe the technology is quite where it needs to be for this experiment to be a true success. Speech recognition technology still produces terrible and sometimes comical results, and to trust such nascent technology with something as critical as a psychiatric diagnosis does not seem prudent.
|
|
|
Post by Anthony Moon MS3 on Aug 25, 2016 16:57:58 GMT -4
The idea of Artificial Intelligence has been a dream of scientists and fiction authors since the 19th century, when Mary Shelley's Frankenstein brought forth the idea of creating an artificial being with the human capacity for reason, thought, and emotion. In the modern age, artificial intelligence is not a question of imagination but a question of when. Using artificial intelligence to enrich and better human lives has been a long-standing goal. The purpose of AI is to create a machine that can follow human thought patterns and accomplish a goal, anything from medical diagnostics and self-driving cars to military applications. But once again, we must ask ourselves: what right do we have to create such a machine? Human intelligence is not just a collection of collated data that we can pull up at a moment's notice and recite; memories are tied to emotion, human insight, and feel, and how we interpret such memories or facts is tied to our emotions. Are we so desperate for automation and ease that we would give up what makes us human? Forgo emotion and feel for a correct answer? Become nothing more than flesh and blood that can merely read a computer screen? As future medical professionals, are we so scared of being wrong that we are willing to become nothing more than a conduit for a database, a machine that can catch such subtle nuances as repeated uses of "this," "that," and "a"? Do we live in such an Orwellian world that future physicians can secretly record and track our use of words in order to diagnose? Fear is a powerful motivator and an inspiration to create. The atomic bomb, the Turing machine, and even antibiotics were created out of fear: fear of death, of inaccurate information, or of disease. However, we must never be scared of being wrong. Yes, in the medical realm being wrong can mean the difference between life and death, but to be wrong is to be human.
AI would be a powerful tool in our arsenal, but we must never sacrifice what makes us human. Alexander Pope once said, "To err is human, to forgive divine." Is ease worth the loss of that which makes us human? AM (MS3)
|
|
|
Post by Paul Mtonga (MS3) on Aug 25, 2016 17:54:45 GMT -4
As exemplified in the article, Artificial Intelligence has unimaginable potential through its ability to diagnose psychiatric disorders (and disease in general) faster, offer novel treatment options, and eliminate human error. These time-saving measures mean more efficiency and reduced costs, which would benefit patients as well as healthcare providers. Artificial Intelligence is expected to be a helpful tool, and there is no doubt that sophisticated machine learning and advanced AI algorithms will find a place in medicine and healthcare over the coming years. Rather than replace doctors, it should advance their capabilities and cover their blind spots.
|
|
|
Post by Camy Dearmin (MS4) on Aug 25, 2016 19:42:51 GMT -4
A few months ago I had the distinct privilege of spending several hours with a mentor of mine, a seasoned clinician and long-time educator who is now embarking on a new phase of his career as part of a venture investigating a risk prediction model that, like the 'Schizophrenia Screener,' could very possibly transform an aspect of mental health care. On that day, and again today as I contemplate this aspiring innovation, I can't help but sense a recurring (and very exciting) theme.
Just think: we can program a computer to detect certain semantic and syntactic elements in a way that might identify, as early as possible, the likelihood that someone is (or soon will be) experiencing a psychotic episode. Well, I find that thrilling!
And amidst the backdrop of recent and much-needed recognition of the global importance of mental disorders (which the WHO estimates account for no less than 13% of the global burden of disease, and for 5 of the 10 leading causes of disability in certain regions of the world), it is emerging technologies such as this that could not only transform the practice of psychiatry but also, once and for all, put mental health firmly on the international health agenda.
Not surprisingly, this engineer-turned-CEO Jim Schwoebel is young; very young. And equally unsurprisingly, he hails from our own backyard at Georgia Tech. How encouraged I am to see the APA not only applaud him, but support his machine learning start-up.
|
|
|
Post by Anirudh Lingamaneni on Aug 25, 2016 19:50:33 GMT -4
With most psychiatric conditions being diagnosed only through clinical presentation, this type of technology offers a quantifiable variable that clinicians can use to accurately track a patient's progress. Sometimes patients may not accurately portray how well a medication is working or how well they feel, and in those kinds of situations NeuroLex could prove to be extremely useful. I believe that technology like NeuroLex can significantly decrease the time it takes for a patient to receive an accurate diagnosis. Also, because of the small details this AI monitors, it could help find the correct medication regimen for the patient faster. While technological advancements like NeuroLex have the potential to help people around the world, I believe there is a long way to go before these kinds of AI are in common use.
|
|
|
Post by N on Aug 25, 2016 23:48:38 GMT -4
The attempt to bring artificial intelligence into psychiatric offices seems to go against the fundamental basis of the psychiatric assessment exam. In the DSM-IV, speech is treated as a manifestation of thought, and a physician can dissect a patient's speech by looking for patterns that are secondary to thought disorders. Examples include echolalia, clanging, pressured speech, circumstantial speech, and "word salad," unintelligible speech of no meaning, which makes its content of little use to a listener. Recognizing speech patterns, rather than the actual words being used, could therefore provide great benefit to a physician, but the software would need to account for the vast variation in speech and speech patterns. Moreover, the idea of having a computer output a quantitative number builds on the flaws of the DSM's use of objective testing during the initial psych exam. Often when speaking to a patient, we rely on the basic doctor-patient relationship: we face the patient, listen, facilitate, and ask questions to generate a targeted response, all while watching for mood changes (e.g., whether affect is mood-congruent). Unless this AI has a camera built in to assess this, the doctor-patient relationship is lost, and as a result medication compliance could be in jeopardy. In theory it sounds great; however, with the overlap seen among mental health disorders, listening for certain words is a small part of an initial psych exam, and using it alone to make a diagnosis wouldn't be reliable.
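Purely as a toy illustration of what one such quantitative speech feature might look like (this is a made-up metric, not NeuroLex's actual method, which the article does not detail), a crude lexical-repetition score could be sketched in Python:

```python
def repetition_score(text: str) -> float:
    """Crude lexical-repetition feature: the fraction of word tokens
    that repeat a word already used (1 minus the type/token ratio).
    Higher values mean more repetitive speech."""
    words = text.lower().split()
    if not words:
        return 0.0
    return 1.0 - len(set(words)) / len(words)

varied = "the doctor listened carefully and asked several follow up questions"
repetitive = "round and round and round the same the same thought again again"

# The repetitive sample scores higher than the varied one.
assert repetition_score(varied) < repetition_score(repetitive)
```

A real system would combine many such features (pauses, pitch, phrase length, and more), and would still face exactly the variation problem described above.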
|
|
|
Post by veronique on Aug 26, 2016 9:12:13 GMT -4
We are certainly in an era where technology has advanced exponentially, and we see great benefits such as reduced production time, increased efficiency and efficacy, and fewer human errors. In many areas of medicine, including mental health care, it's true that small errors can be catastrophic. Medication given to patients based on a diagnosis determined by a machine may be worrisome to many. It is important to note that in mental health care, artificial intelligence can be a huge asset, but it should not take over from human interaction with patients. Let us be careful not to dismiss the value of the physician-patient relationship or the importance of therapy and counselling. This risk becomes more disturbing when considering variations in age, gender, ethnicity, race, or region. If an AI is trained on speech samples that are all from one demographic group, normal samples from outside that group might result in false positives. AI should be used as an aid in making a diagnosis, alongside the physician's expertise; this combination could definitely be of great help.
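As a minimal sketch of that false-positive concern (all the numbers, the groups, and the pause-length feature here are hypothetical, invented purely for illustration), consider a screener whose cutoff is calibrated on healthy speakers from only one demographic group:

```python
import random

random.seed(0)  # reproducible simulation

# Hypothetical feature: average pause length (seconds) between phrases.
# Group A (the training demographic): healthy speakers average 0.4 s.
# Group B (an unseen demographic): healthy speakers average 0.7 s,
# purely because of dialect and conversational style.
healthy_a = [random.gauss(0.4, 0.1) for _ in range(1000)]
healthy_b = [random.gauss(0.7, 0.1) for _ in range(1000)]

# Cutoff chosen from group A alone: the 95th percentile of its healthy speakers.
cutoff = sorted(healthy_a)[int(0.95 * len(healthy_a))]

fp_rate_a = sum(x > cutoff for x in healthy_a) / len(healthy_a)
fp_rate_b = sum(x > cutoff for x in healthy_b) / len(healthy_b)

# By construction roughly 5% of healthy group A is flagged, but most of
# healthy group B lands above the cutoff and would be wrongly marked "at risk".
print(f"group A flagged: {fp_rate_a:.0%}, group B flagged: {fp_rate_b:.0%}")
```

The fix is the one implied above: calibrate and validate the screener on speech samples from every population it will actually be used on.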
|
|