
  • AI in Clinical Practice

    Posted by missstitem on March 18, 2023 at 12:51 am

    Just wanted to get a feel for what’s out there. Went to RSNA, and while there are some interesting applications, I get the sense that most of it is hype at the moment. Hard to get a sense of how good these models actually are – especially since a lot of the demos cherry-pick their best cases.
     
    What has your experience with AI in clinical practice been? Any models that actually help reduce reading time or improve quality?

    poymd25 replied 1 year, 3 months ago · 9 Members · 10 Replies
  • 10 Replies
  • lisa.kipp_631

    Member
    March 18, 2023 at 2:12 pm

    RAPID – Meh. Helpful for large supratentorial infarcts, which I’d be pissed if my first-year resident missed. I have seen it sometimes call smaller infarcts on the perfusion that I would have missed. Has very significant issues with giving false results if there’s movement/artifact. It’s useless for the posterior fossa. Their LVO AI is obviously a cheap algorithm I could produce in an afternoon. Can’t recognize that the vessels are less apparent because the other hemisphere contains a large bullet.

    Neuroquant – sucks and gives unreliable answers. Feels great when it agrees with me but very frustrating when it makes me argue with the epileptologist who wants to do a wrong-sided hippocampectomy. If you watch their training videos, they encourage diagnosis based on volume trends of a few percentiles. That would be malpractice; it is clearly not accurate enough for that. Conversely, LesionQuant follow-up results seem to be very reproducible.

    • fborzi_840

      Member
      March 18, 2023 at 2:27 pm

      We trialed Aidoc for intracranial hemorrhage and PE. For ICH it was highly sensitive but had a lot of false positives. It was good at picking up thin subdurals which could easily be missed. It was probably a little better for PE. Kind of nice to use as a decider on borderline cases for smaller PEs.

      • amotter

        Member
        March 18, 2023 at 2:34 pm

        Rad AI – generates impressions. It is actually really good.

      • gmail.com

        Member
        March 18, 2023 at 2:48 pm

        What do you think is the outcome for those thin SDHs – 1-3 mm thick? Almost always no mass effect. I know the protocol is to watch the patient, frequent neuro checks, and f/u CT in 6 hrs or so. Seems like for these isolated thin SDHs, they usually just get a f/u CT and, if stable, then discharge. Thoughts?
         

        Quote from NeuroBro

        We trialed Aidoc for intracranial hemorrhage and PE. For ICH it was highly sensitive but had a lot of false positives. It was good at picking up thin subdurals which could easily be missed. It was probably a little better for PE. Kind of nice to use as a decider on borderline cases for smaller PEs.

        • consuldreugenio

          Member
          March 18, 2023 at 4:59 pm

          Some patients with missed subdurals are discharged and don’t get timely follow-up, especially if the mechanism of trauma was very low level or unknown. If the patient is on anticoagulation, the subdural hematoma can enlarge quite quickly and lead to poor outcomes from delayed management. Not uncommon.

          • missstitem

            Member
            March 19, 2023 at 1:01 pm

            Thanks all.
             
            Saw Rad AI at RSNA, and was quite impressed. How often do you have to edit the impression? And do you feel that it’s concise?
             
            Anyone else have any experience with pulmonary nodule or CXR models?

            • Unknown Member

              Deleted User
              March 19, 2023 at 1:12 pm

              I find the pulmonary nodule CAD for oncology patients really helpful, and it can hardly be called AI. I know some hate it, and I’m not sure why.
               
              I see all sorts of opportunities for helping rads. Subdurals, fractures, nodules, etc. Especially as reads are becoming lightning fast.
               
              Unfortunately, I believe it will iteratively lead to our replacement in some venues, as we check off automated reports for routine exams, given that many will eventually not even review them for accuracy. Still, if used appropriately, it will decrease misses, which is good for patients.
               
               

              • missstitem

                Member
                March 19, 2023 at 7:50 pm

                May I ask what algorithm you guys use at your practice? Is it Siemens’, ClearRead, etc.?

                • jonhanse_770

                  Member
                  March 20, 2023 at 5:56 am

                  All AI has its pros and cons. Needless to say, it won’t take the place of rads any time soon, but it will be a good adjunct as the need for speed and the volume of studies needing to be read increase. While AI can catch things a rad might miss, it also might flag a false positive that makes a study take longer to read, so basically it’s a wash. That said, it’s still worth a closer look for sure.
                   
                  As for Rad AI, when I first saw them several years back I thought they would take the world by storm. It’s a great, easy-to-use product that can save rads time reading, and that’s not just marketing fluff. Their marketing could definitely use some help, and sadly they get lumped in with other AI products when they really aren’t what many consider medical imaging AI (even though they work well with AI systems). Love the product though.
                   
                  PACSMan   

                  • poymd25

                    Member
                    March 20, 2023 at 6:52 am

                    Used a few. All good, and they help improve accuracy and/or reduce report times. AI-Rad Companion, Rad AI, and one or two others that evade recall.