AI matches radiologist performance on mammo, DBT exams
Posted by Nieves1953 on May 24, 2023 at 10:14 pm
We'll talk later when the AI is willing to take on the liability of reading them.
satyanar replied 1 year, 4 months ago 10 Members · 13 Replies
-
wonder how long before we actually start to see this tech affect mammo workload/practice?
say the tech was completely ready today. wouldn't there at least be a few years of validating data with human oversight before letting the AI system function as a true standalone?
i think either way it's becoming increasingly clear that radiology in its current form won't really be a thing 10-15 years from now
Can you believe they are actually trying to INCREASE radiology trainee spots? Complete heads in the sand situation.
-
This is certainly not the time to be a mammographer. I think for the next 20 years, it will pay best to have a broadly diversified skill set. When much of your income is dependent on a handful of CPT codes, you are very vulnerable!
-
These studies have essentially zero external validity. Clearly they should be comparing radiologist vs radiologist PLUS AI vs AI. Compare the ROC AUC for these. Who designs this stuff?
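For what it's worth, the three-arm comparison could be sketched like this. Everything here is simulated and my own invention: the "skill" parameter is a made-up stand-in for reader performance, and the AUC is the standard Mann-Whitney estimate, not data from any actual study.

```python
# Toy sketch: simulated reader scores for a three-arm ROC AUC comparison.
# All numbers are invented; 'skill' is a hypothetical separation parameter.
import numpy as np

rng = np.random.default_rng(0)
truth = rng.integers(0, 2, size=400)  # 0 = benign, 1 = cancer

def auc(truth, scores):
    """Mann-Whitney ROC AUC: P(random positive outscores a random negative)."""
    pos, neg = scores[truth == 1], scores[truth == 0]
    gt = (pos[:, None] > neg[None, :]).mean()   # positive strictly higher
    eq = (pos[:, None] == neg[None, :]).mean()  # ties count half
    return gt + 0.5 * eq

def simulated_scores(skill):
    """Positives score ~N(skill, 1), negatives ~N(0, 1) -- purely hypothetical."""
    return skill * truth + rng.normal(0, 1, size=truth.size)

for arm, skill in [("radiologist", 1.0), ("radiologist + AI", 1.5), ("AI alone", 1.2)]:
    print(f"{arm}: AUC = {auc(truth, simulated_scores(skill)):.3f}")
```

The point is just that all three arms get scored on the same cases with the same metric, which is what these papers should be reporting head-to-head.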
-
This is just FUD; AI will be a rad's best friend. Pareto rule: even in the scariest scenario, AI will just help rads get faster by taking over the 80% grunt stuff, leaving the challenging 20% for human eyes.
-
Totally agree. But if it increases efficiency by 80%, then won't we need fewer rads?
-
We are in the middle of a major AI hype cycle.
Yes, this is big and transformative technology, but it is not really intelligence as we humans think of it, since it has no judgement.
AI is likely to be a tool for human rads that will improve our jobs and lives.
-
I’m starting to get more curious about what ChatGPT is actually doing. It certainly impresses me.
Here’s a long article from Wolfram– I’ve just started it, but basically it seems to be saying ChatGPT/LLMs are “just adding one word at a time” using an amazingly large # of statistical weights based on the massive amount of training.
I’m in no position to say that’s not correct, but the way it answers specific questions, incorporates new questions/info in a conversation, etc, is quite impressive (although of course there are ways to make it sound dumb).
I’m trying to understand how we go from “just adding one word at a time” to the output I’m seeing.
So, basically, how does it incorporate the questions you ask it (including follow-up questions) into the choices for “next words”? Because that seems to be the magic.
[link=https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/]https://writings.stephenw…-and-why-does-it-work/[/link]-
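My own toy sketch of how the context steers the "next word" choice, for whatever it's worth. This is nothing like the real architecture (which conditions on the entire sequence via attention over billions of weights), but it shows the basic mechanism: the model only ever picks a next word, yet the probabilities it picks from depend on what you typed.

```python
# Toy illustration (my own, NOT how GPT actually works): a tiny bigram
# table plays the role of the learned weights. The distribution over
# "next words" changes depending on the context -- including your question.
from collections import Counter, defaultdict

corpus = ("the scan shows a mass . the mass is benign . "
          "the scan is normal .").split()

# Count word -> next-word transitions (a stand-in for training).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word_distribution(context):
    """The 'choices for next words' depend on what the context ends with."""
    counts = bigrams[context.split()[-1]]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("the"))  # 'scan' is twice as likely as 'mass'
print(next_word_distribution("a"))    # 'a' is only ever followed by 'mass'
```

A real LLM does the same kind of conditioning, except the "table lookup" is replaced by a network that attends to every token so far, which is why follow-up questions in a conversation shift the answers.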
and just to extrapolate wildly, the threat from AI to us is the same as the threat of the steam engine etc to draft horses:
A horse is limited by biology to producing, on average across a day, about 1 horsepower (much more, 15 or so, for brief periods). Yes it runs on hay and will eat apples out of your hand, but once we had engines that could produce much more power etc, the era of the horse as a source of work was pretty much over.
Our brain is limited to something like 10-20 Watts. Yes it’s amazingly efficient, we don’t fully understand it, I have no idea what consciousness really is, etc etc. No one is saying a 10 Watt computer can compete with a human. But, to the degree that we can build machines that duplicate whatever is going on in our brains, they won’t be limited by our biology. They can use a megawatt of power, or a gigawatt, or whatever.
So, *if* we ever build machines that truly have general intelligence, whatever exactly that is, they will very quickly surpass us.
I’m not sure we understand general intelligence/consciousness/etc well enough to know if that’s likely to happen, but I get the concern.
-
It encodes the content of the question into a very high dimensional vector that embeds the meaning of the input sequence, and then uses that context to generate a sequence of output words for its response.
There are other models that do this more explicitly (eg encoder decoder networks), but most deep neural networks are doing this to some extent.
An easier-to-visualize version is the old Word2Vec model, which shows how words can be embedded in a high-dimensional vector space to do simple operations like SAT analogy questions.
These LLMs encode the content of the entire input phrase, not just a single word, and update that encoding with each output word, but it's harder to visualize what the model is actually doing.
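The Word2Vec analogy trick mentioned above can be shown with hand-made vectors. These 3-d "embeddings" are invented for illustration; real models learn hundreds of dimensions from text, but the vector arithmetic is the same idea.

```python
# Hand-made sketch of Word2Vec-style analogy arithmetic. The embeddings
# below are invented, chosen so "royalty" and "gender" live on different axes.
import numpy as np

emb = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

def nearest(vec):
    """Closest vocabulary word to 'vec' by cosine similarity."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(emb, key=lambda w: cos(vec, emb[w]))

# The classic SAT-analogy move: king - man + woman ~ queen
analogy = emb["king"] - emb["man"] + emb["woman"]
print(nearest(analogy))  # queen
```

In a trained model the directions for things like gender or royalty emerge from the text statistics rather than being hand-placed, but the geometry works the same way.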
-
AI still needs a lot of work, as some versions and applications are prone to the [i]I (the AI) just made stuff up[/i] problem.
Trust, but verify
[link=https://arstechnica.com/tech-policy/2023/05/lawyer-cited-6-fake-cases-made-up-by-chatgpt-judge-calls-it-unprecedented/]https://arstechnica.com/tech-policy/2023/05/lawyer-cited-6-fake-cases-made-up-by-chatgpt-judge-calls-it-unprecedented/[/link]
-
Do you have actual knowledge to add to a discussion among radiologists, Frumi? That would be much appreciated and a huge improvement.