
Kathleen Martin

Guest
Supercomputing has come a long way since its beginnings in the 1960s.
Initially, many supercomputers were based on mainframes; however, their cost and complexity were significant barriers to entry for many institutions. The idea of utilising multiple low-cost, networked PCs to provide a cost-effective form of parallel computing led research institutions down the path of high-performance computing (HPC) clusters, starting with “Beowulf” clusters in the 1990s.
More recently, we have witnessed HPC advance from the original CPU-based clusters to systems that do the bulk of their processing on Graphics Processing Units (GPUs), driving the growth of GPU-accelerated computing.
Data and Compute – the GPU's role
 
While HPC was scaling up its compute resources, data was growing at a far faster pace, presenting big-data challenges for storage, processing, and transfer.
GPU parallel computing on HPC systems has been a real game changer for AI, because it can process all of this data in a short amount of time. As workloads have grown, so too have GPU parallel computing and AI machine learning. Image analysis is a good example of how the power of GPU computing can support an AI project: an imaging deep learning model that would take 72 hours to process on a single GPU runs in only 20 minutes on an HPC cluster with 64 GPUs.
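The core idea behind that speedup is data parallelism: the same per-image work is distributed across many devices at once. The sketch below illustrates the pattern with a thread pool standing in for a GPU cluster; `process_image` is a hypothetical placeholder for real model inference, not anything from the article.

```python
from concurrent.futures import ThreadPoolExecutor

def process_image(pixels):
    # Stand-in for per-image work (hypothetical): just sum pixel values.
    return sum(pixels)

def process_serial(batch):
    # Baseline: one worker handles every image in turn.
    return [process_image(img) for img in batch]

def process_parallel(batch, workers=4):
    # Split the batch across workers, mirroring how a GPU cluster
    # divides a dataset across devices; results come back in order.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_image, batch))
```

Because each image is independent, the parallel version produces exactly the same results as the serial one; only the wall-clock time changes as workers are added.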
How is HPC supporting AI growth?
Storage, networking, and processing all have to keep up for AI projects to work at scale; this is where AI can make use of the large-scale, parallel environments that GPU-equipped HPC infrastructure provides to process workloads quickly. Training an AI model takes far more time than testing one. The importance of coupling AI with HPC is that it significantly speeds up the training stage and boosts the accuracy and reliability of AI models, whilst keeping training time to a minimum.
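How much adding GPUs actually shortens training depends on how much of the job parallelises. A standard way to reason about this (not from the article) is Amdahl's law, sketched below; the 95% parallel fraction is an illustrative assumption, not a measured figure.

```python
def amdahl_speedup(parallel_fraction, n_devices):
    # Amdahl's law: the ideal speedup when a fraction p of the work
    # parallelises perfectly across n devices and the rest stays serial.
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_devices)

# A perfectly parallel job scales linearly with device count:
assert amdahl_speedup(1.0, 64) == 64.0

# But if only 95% of a training job parallelises, 64 GPUs give
# roughly a 15x speedup, since the serial 5% dominates at scale.
print(round(amdahl_speedup(0.95, 64), 1))
```

This is why HPC clusters pair many GPUs with fast storage and networking: the serial fraction (I/O, data loading, synchronisation) caps the achievable speedup, so it has to be driven as low as possible.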
Continue reading: https://technative.io/hpc-ai-working-together-data-overload/
 
