- Parallel computing speeds up data crunching. Projects that took hours now take seconds.
- It's the fundamental concept that, along with advanced semiconductors, has ushered in the AI boom.
- Increased availability of advanced chips has made parallel computing more accessible.
At a data science conference in midtown Manhattan in early November, the Nvidia exhibitor table had a constant swarm of eager people around it. They weren't angling for jobs or selfies. By and large, they were nerding out on the possibilities of parallel computing.
This is the fundamental concept that has catapulted Nvidia to become the world's most valuable company. And it was on show at the PyData conference during a short demo conducted by Nvidia engineering manager Rick Ratzel.
Nvidia makes graphics processing units, or GPUs, which are computer chips that handle many tasks simultaneously. Hence the term parallel computing.
The chips that most people know are central processing units, or CPUs. You'll find them in your laptop handling a wide range of tasks. While quick and efficient, they generally handle these tasks one at a time in a prescribed order.
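The contrast is easy to see in code. Below is a rough, hypothetical sketch (not from the Nvidia demo): the same calculation written as a one-value-at-a-time loop, CPU-style, and as a single vectorized operation that a data-parallel library can spread across many cores at once. The synthetic data and the mention of CuPy are assumptions for illustration.

```python
# Rough illustration (not demo code): sequential loop vs. data-parallel operation.
import numpy as np

ratings = np.random.rand(1_000_000)  # synthetic data for illustration
mean = ratings.mean()

# Sequential: one value at a time, in a prescribed order (CPU-style).
centered_loop = np.empty_like(ratings)
for i in range(ratings.size):
    centered_loop[i] = ratings[i] - mean

# Data-parallel: one operation applied to every element at once.
centered_vec = ratings - mean

# With a GPU array library such as CuPy (assuming an Nvidia GPU is present),
# the vectorized line above runs across thousands of GPU cores unchanged.
assert np.allclose(centered_loop, centered_vec)
```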
GPUs are perfect for the massive data-crunching that's needed to build and run AI models such as OpenAI's GPT-4, the computing brain behind ChatGPT.
But before ChatGPT burst on the scene in late 2022, parallel computing already had the potential to turbocharge the kind of data science that serves up relevant internet ads, optimizes supply-chain decisions, and attempts to detect online fraud.
This is why Nvidia has had a long relationship with PyData, a conference for developers who use the Python programming language for data analysis.
This year, Ratzel was there to introduce a new Nvidia software integration for Python developers who use common open-source data-management tools.
He started with a data set of movie reviews and numerical ratings. The aim was to make good recommendations by matching each person as closely as possible with the reviewers who shared their taste. The math to determine who had similar tastes wasn't that complicated, but the calculations involved a large amount of data on 330,000 users.
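The article doesn't include the demo's code, but a minimal sketch of the kind of similarity math described might look like the following, assuming a hypothetical ratings table with user_id, movie_id, and rating columns and plain pandas and NumPy rather than whatever Ratzel actually used.

```python
# Minimal sketch of user-to-user similarity from movie ratings (illustrative only).
import numpy as np
import pandas as pd

# Hypothetical ratings data: which user rated which movie, and how highly.
ratings = pd.DataFrame({
    "user_id":  [1, 1, 2, 2, 3],
    "movie_id": [10, 20, 10, 30, 20],
    "rating":   [5.0, 3.0, 4.0, 2.0, 5.0],
})

# Pivot to a user-by-movie matrix; unrated movies become 0.
matrix = ratings.pivot_table(
    index="user_id", columns="movie_id", values="rating", fill_value=0
).to_numpy()

# Cosine similarity between every pair of users: simple math, but the
# matrix balloons when there are 330,000 users instead of 3.
norms = np.linalg.norm(matrix, axis=1, keepdims=True)
similarity = (matrix @ matrix.T) / (norms * norms.T)

print(similarity.round(2))
```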
"It's giant," he said.
Running the initial analysis took two hours on a traditional computer with a CPU, he said. A few tweaks got that down to one hour.
Then Ratzel switched to a GPU and ran the analysis again. He got it done in less than two seconds. That speed comes from the parallel computing that the GPU enables.
The concept has been around since the 1980s, but until relatively recently, the capability to actually perform parallel computing was hard to access. The rise of GPU availability through cloud providers has made it easier for eager data scientists to complete their own projects in seconds rather than hours.
With so much time saved, researchers can run many more experiments and take on a lot more projects.
"You can see how this changes how you work," Ratzel said. "Now I can try lots of things, do lots of experimenting, and I'm using the exact same data and the exact same code."
The computations GPUs perform to enable generative AI are far more complex and voluminous than those needed to recommend movies from existing, structured data based on shared characteristics and preferences.
This immense volume of computations is what has driven so much demand for Nvidia GPUs. And that, in turn, has made Nvidia's business so valuable to investors.