
  • Cloud Hosted PACS versus On Site Solution

    Posted by sydnifreakingford_577 on August 10, 2020 at 5:51 am

    I know the trend these days revolves around the move to cloud-hosted PACS systems and the cost savings (real or perceived) seen by the C-suite. My question is aimed at end users who have experience with both architectures, rather than sales reps who have an agenda. Do you see a noticeable decrease in image viewing performance with a cloud-hosted option? I realize the connection speeds from the modality to the cloud, and from the cloud back to the viewing stations, are not the same as when everything is on the same LAN, so I am wondering whether there is enough of a performance difference to cause issues for heavy viewers such as radiologists and ER clinicians. Thanks for any input you can provide.

  • 17 Replies
  • cbkent

    Member
    August 10, 2020 at 7:06 am

    We attempted to host our mammo/tomo in the cloud (well-known vendor). The load times for 1-2 GB data sets were unreasonable for the rads. At present we keep the PACS local for performance and resiliency.

    • jonhanse_770

      Member
      August 11, 2020 at 12:26 pm

      You have to have a solid pipeline to the cloud, at least 100 Mb/sec, to do this. Do the reports locally, send them up to the cloud, and download scheduled priors from the cloud to the local server the night before so there is no delay in getting them. If you are doing tomo, 400 Mb/sec is the minimum with 1 Gb/sec preferred, but a 1 Gb/sec WAN, if/where available, will kill you financially.
       
      PACSMan
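
      For a rough sense of what those link speeds mean in practice, here is a back-of-the-envelope sketch (Python; the 1.5 GB study size is just illustrative, and real throughput will land well below line rate once protocol overhead and shared traffic are factored in):

[code]
# Back-of-the-envelope transfer times for one tomo-sized study.
# Assumes an ideal, uncontended link; real-world throughput is usually
# well below line rate once TCP/TLS/DICOM overhead and other traffic count.

study_size_gb = 1.5                      # illustrative tomo study size
link_speeds_mbps = [100, 400, 1000]      # 100 Mb/s, 400 Mb/s, 1 Gb/s

study_size_mbits = study_size_gb * 8 * 1000   # GB -> megabits

for mbps in link_speeds_mbps:
    seconds = study_size_mbits / mbps
    print(f"{mbps:>5} Mb/s: ~{seconds / 60:.1f} min per study")
[/code]

      At 100 Mb/sec that works out to roughly two minutes per study before anything else on the link is considered, which is why pulling scheduled priors the night before matters so much.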
       
       

      • edsonandrade

        Member
        September 10, 2020 at 6:40 am

        I think having pre-fetch and local cache servers onsite can make a big difference in solving these cloud issues.

        • SN242012000

          Member
          September 14, 2020 at 1:35 pm

          If you keep trying to implement legacy PACS in the public cloud you’ll keep failing and will require workarounds as others have commented. But you don’t need to do that. Break the paradigm and implement a cloud-engineered PACS that doesn’t have those legacy restrictions and limitations. When you choose a proper cloud-engineered solution, you won’t have those problems. How can you be sure? Try before you buy – Demand a pilot with your preferred solution, over your own network, with your own data, integrated to your own systems. Then finalize your decision.
           

      • Unknown Member

        Deleted User
        November 16, 2020 at 10:50 am

        You are absolutely correct about the internet speed to and from the cloud, especially when dealing with large data sets such as tomo. One thing I noticed in the original post was that he talked about going from the “modality to the cloud”. My hope is that he meant from the modality to some sort of PACS relay workstation that can apply algorithms to expedite the transmission to the cloud. Of course, every vendor has their own take on how this is done, and I can only speak on behalf of Infinitt. You also mentioned pre-caching relevant priors; if your provider can do that, it is essential, especially with those big mammo sets!
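
        To make the pre-caching idea concrete, here is a minimal sketch of the kind of query a nightly prefetch job might run against a remote archive, using the open-source pynetdicom library. The host, port, AE title and patient ID are placeholders, not any vendor's actual setup; a real gateway would follow the C-FIND with a C-MOVE or C-GET to pull the matching priors down to the local cache.

[code]
# Minimal sketch: find prior mammo studies for a scheduled patient so they
# can be pulled to the local cache overnight. Host, port and AE titles are
# placeholders, not any vendor's actual configuration.
from pydicom.dataset import Dataset
from pynetdicom import AE
from pynetdicom.sop_class import StudyRootQueryRetrieveInformationModelFind

ae = AE(ae_title="PREFETCH_SCU")
ae.add_requested_context(StudyRootQueryRetrieveInformationModelFind)

query = Dataset()
query.QueryRetrieveLevel = "STUDY"
query.PatientID = "12345"            # from tomorrow's schedule (hypothetical)
query.ModalitiesInStudy = "MG"
query.StudyInstanceUID = ""          # ask the archive to return these
query.StudyDate = ""

assoc = ae.associate("cloud-archive.example.org", 11112)
if assoc.is_established:
    responses = assoc.send_c_find(query, StudyRootQueryRetrieveInformationModelFind)
    for status, identifier in responses:
        if status and status.Status in (0xFF00, 0xFF01):  # pending = a match
            print(identifier.StudyDate, identifier.StudyInstanceUID)
    assoc.release()
[/code]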

        • michele.mcguire_299

          Member
          November 23, 2020 at 2:05 pm

          Many of the old-guard PACS are trying to put lipstick on a pig, and it won't work.
          I agree with Brad; there are many vendors like his that have developed a cloud-based PACS, and they work.
          If you can stream a 50 GB HD movie while your kid plays Call of Duty, your internet is not the issue; it is the application you're running.

          • julie.young_645

            Member
            November 29, 2020 at 10:08 am

            All of this is at least partial vindication/confirmation of what I’ve been saying for years and years…
             
            [link=https://doctordalai.blogspot.com/2010/10/dalais-laws-of-pacs-revisedthe-ranzcr.html]https://doctordalai.blogs…revisedthe-ranzcr.html[/link]

            • ruszja

              Member
              November 30, 2020 at 6:00 am

              We deal with a few different hospitals; one of them has ‘cloud everything’. It's clunky at best. I am sitting 10 ft from the mammo unit, yet gigabytes of data have to be moved to California and back before I can pull them up on my workstation. Makes no sense.
               
              Did I mention the day a contractor dug up the fiber one block down from the hospital? Good times.
               
              Yes, absolutely, cloud is the way to go for an off-site ‘warm’ backup and to enable remote access. For day-to-day operation it's an impediment.

              • sofia89amaya_857

                Member
                December 1, 2020 at 4:20 pm

                FW,
                Most modern/native cloud solutions deploy edge devices (AKA DICOM gateways/routers) at the imaging acquisition sites. In the early days these were scaled-down versions of the primary software application, tasked with simply compressing, encrypting and persistently sending images to the cloud. But as Moore's law prevailed, hardware got significantly more robust and cheaper while the software got more nimble and refined, to the point where a full copy of the DB, the main application and lots of storage could all coexist on the edge device(s). This de facto created a ‘hybrid’ implementation where a complete copy of the PACS software could be dotted around a distributed network to provide instant local access to the images for reading, reporting, study management, etc. for those working on the LAN, while simultaneously transmitting the images up to the cloud so that those working remotely over the WAN/Internet could access them as easily as if they were on the LAN. The DICOM metadata is measured in KBs, so it all transmits up to the cloud quickly and everyone can log on to a central worklist that has knowledge of the studies from all locations in near real time, while the images make their way up depending on upload speeds, which are often asymmetric (most bandwidth is provisioned with faster download speeds than upload, but this is changing rapidly).
                 
                These edge devices take the latency out of viewing images locally, which is absolutely a prerequisite when dealing with tomo (which more often than not has multiple priors), echo, etc. Sprinkle in image streaming, rules-based pre-fetching and pre-caching of images to the workstations in advance of clinicians or techs viewing, and you can really fine-tune things to give a very rich experience to the end user.
                 
                As this thread points out, there are a handful of companies out there that were architected this way from the very beginning and thus have a leg up on the old line players who at best can try and hang their legacy software in a datacenter, connect sites via VPN’s (which offer no compression, persistent sending, queue management, local access to images, etc.) and call it a ‘cloud.’ 
                 
                The more mature companies will have made the leap to a true multi-tenant architecture where there is a single instance of the software running on a public cloud (think Microsoft Azure) with security parameters managing the segregation of the data. They will have cool features like bi-directional secure image sharing with two-factor authentication built in, as this is what the cloud allows for. The software application layer and all the data will be replicated in real time across multiple redundant facilities taking advantage of block storage, so not only is your data safe, but, equally if not more important, you are getting true business continuity at the cloud level as well as locally. If you lose internet, just log directly into the edge device; it's a full copy of the software, so you can do everything locally until the connection is restored. Yeah, you won't be able to access images from remote sites, but that would be the case with any system, so you're not losing anything there.
                 
                I'm sure someone will weigh in sooner or later on server-side rendering and how awesomely fast it is. But it is extremely expensive to deploy and maintain and so far seems only to be affordable for academic centers and very large community hospital IDNs. I'm sure over time the cost will come down, but for now it's still a Ferrari-type technology when most folks only need (and can afford) a Honda Accord or Toyota Camry, which by my standards offer a great value proposition!
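
                For anyone wondering what the 'persistent sending' piece of an edge device boils down to, here is a deliberately simplified sketch (plain Python, no vendor API): keep the study on local storage first, then keep retrying the upload queue until the cloud acknowledges, so a WAN outage only delays replication instead of losing studies.

[code]
# Deliberately simplified store-and-forward loop for an edge gateway.
# Real products add compression, encryption, bandwidth shaping, audit, etc.
import queue
import time

upload_queue: "queue.Queue[str]" = queue.Queue()

def receive_study(study_path: str) -> None:
    """Study arrives from the modality: it stays on local storage (so the
    rads can read it immediately) and is queued for upload to the cloud."""
    upload_queue.put(study_path)

def upload_to_cloud(study_path: str) -> bool:
    """Placeholder for the real transfer (STOW-RS, vendor API, etc.)."""
    return True

def uploader_loop() -> None:
    while True:
        study = upload_queue.get()
        while not upload_to_cloud(study):
            time.sleep(60)   # WAN down? wait and retry; nothing is lost locally
[/code]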
                 
                 

                • ggaspar

                  Member
                  December 2, 2020 at 5:37 am

                  Well said, RISPACS Guy. Appreciate the clarity on the topic.

          • Unknown Member

            Deleted User
            January 4, 2021 at 9:11 am

            There is no shortage of lipstick going around out there, LOL.

  • drmakyuz

    Member
    September 15, 2020 at 6:40 am

    Many of the same issues apply if you are considering a multi-site PACS or the cloud. Anytime you have a wide-area-network pipe between core servers and workstations, you need to really look at how the solution is implemented and the performance of the pipes in between.
    – It means really understanding the data flow
    – It means doing the measurements
    If you do it right, then you can deliver location-independent service, which used to seem like a luxury until we got hit with Covid, forest fires and hurricanes all at the same time.
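
    On "doing the measurements": even something as simple as timing a test retrieval from the remote side gives you an effective-throughput number to compare against the circuit's rated speed. A rough sketch follows (the URL is a placeholder; in practice you would time a representative study pulled through whatever protocol your PACS actually uses):

[code]
# Rough effective-throughput check: time a test download from the remote
# archive and compare against the nominal circuit speed. URL is a placeholder.
import time
import requests

test_url = "https://remote-archive.example.org/test-study.bin"  # placeholder

start = time.monotonic()
resp = requests.get(test_url)
elapsed = time.monotonic() - start

mbits = len(resp.content) * 8 / 1_000_000
print(f"Pulled {mbits:.0f} Mb in {elapsed:.1f} s -> ~{mbits / elapsed:.0f} Mb/s effective")
[/code]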
     
     

    • SN242012000

      Member
      September 15, 2020 at 12:27 pm

      I think you said it best, Paul — “There are some PACS that create friction. And then there are others that are frictionless.”
       
      Select at your own peril.
       
       

  • Unknown Member

    Deleted User
    October 28, 2020 at 3:40 pm

    It's not “the cloud” per se that's the issue. It's about how your PACS is designed. As others have mentioned, this is similar to using a PACS that has its storage and servers remote from you. Some PACS are designed to work flawlessly in these settings. Others are mediocre to poor. A word to the wise: just because a vendor can show you one of their installations working in the cloud doesn't mean it will work in yours, so verify that it works in your setting, with potentially different firewalls, VPNs, connectivity, latency issues, etc.
     

    • obebwamivan_25

      Member
      November 1, 2020 at 6:39 pm

      Does anyone have experience using Microsoft Azure? To me, that is an interesting option, where the cloud plays a role, the operating system is managed by someone else (Microsoft, presumably), and if changes happen to an individual's workspace, it can be recreated without affecting the enterprise. I don't know how images move in and out of that space, though.

      • eveisenb

        Member
        November 4, 2020 at 3:13 pm

        Microsoft has recently announced a [link=https://techcommunity.microsoft.com/t5/healthcare-and-life-sciences/introducing-the-medical-imaging-server-for-dicom/ba-p/1694397]DICOMweb server[/link] that uses Azure. I’ve heard some reports of people playing around with it, but no one who is using it seriously yet.
         
        That said, using a cloud vendor (Azure, GCP, AWS or other) is very different from using a product that runs in the cloud from an imaging vendor. There is a lot of additional work needed to ensure a secure, scalable, resilient solution. An imaging vendor will have done a thorough evaluation of which approach to the cloud, or which cloud vendor, best supports the whole solution (or their product line).
         
        So far, my impression of the cloud vendors’ DICOM offerings is that they are largely proof-of-concept demonstrations or appropriate for tasks like AI research.
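
        To answer the earlier question about how images move in and out of that space: with a DICOMweb service like the one Microsoft announced, everything is plain HTTPS, i.e., QIDO-RS for search, WADO-RS for retrieval and STOW-RS for upload. A minimal search might look like the sketch below (the service URL and token are placeholders; the query parameters are standard DICOMweb):

[code]
# Minimal QIDO-RS search against a DICOMweb endpoint (e.g., the Azure
# Medical Imaging Server for DICOM). URL and bearer token are placeholders.
import requests

base_url = "https://example-dicom-service.azurehealthcareapis.com/v1"  # placeholder
headers = {
    "Accept": "application/dicom+json",
    "Authorization": "Bearer <token>",   # placeholder
}

# Find MG studies for one patient; WADO-RS would then fetch a matching study:
#   GET {base_url}/studies/{StudyInstanceUID}
resp = requests.get(
    f"{base_url}/studies",
    params={"PatientID": "12345", "ModalitiesInStudy": "MG"},
    headers=headers,
)
if resp.status_code == 200:
    for study in resp.json():
        print(study["0020000D"]["Value"][0])   # StudyInstanceUID tag
[/code]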

        • mrh123

          Member
          November 8, 2020 at 9:59 am

          As Elliot mentioned, Azure requires DICOMweb access, which is OK for retrieval (assuming your PACS supports it), but not many PACS systems support it as an export. No clue why they did not implement "traditional" DICOM push, but that might be related to them not being familiar with the medical imaging space.
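
          For comparison with a traditional C-STORE push, the 'export' side over DICOMweb (STOW-RS) is just an HTTPS POST of the DICOM file wrapped in a multipart/related body. A bare-bones sketch, with the endpoint and auth as placeholders:

[code]
# Bare-bones STOW-RS upload of a single DICOM file (illustrative only).
# The service URL and bearer token are placeholders.
import uuid
import requests

base_url = "https://example-dicom-service.azurehealthcareapis.com/v1"  # placeholder
boundary = uuid.uuid4().hex

with open("IMG0001.dcm", "rb") as f:
    body = (
        f"--{boundary}\r\nContent-Type: application/dicom\r\n\r\n".encode()
        + f.read()
        + f"\r\n--{boundary}--\r\n".encode()
    )

resp = requests.post(
    f"{base_url}/studies",
    data=body,
    headers={
        "Content-Type": f'multipart/related; type="application/dicom"; boundary={boundary}',
        "Accept": "application/dicom+json",
        "Authorization": "Bearer <token>",   # placeholder
    },
)
print(resp.status_code)   # 200/202 on success
[/code]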