Snowflake's finding NeMo to train custom AI models

Submerge an Nvidia LLM 20,000 leagues under the data lake


Snowflake Summit Nvidia and cloud data warehouse company Snowflake have teamed up to help organizations build and train their own custom AI models using data they have stored within Snowflake's platform.

Announced at the Snowflake Summit in Las Vegas, the move will see Nvidia's NeMo framework for developing large language models (LLMs) integrated with Snowflake, allowing companies to use the data in their Snowflake accounts to build custom LLMs for generative AI services, with chatbots, search, and summarization listed as potential uses.

One advantage touted by the two companies is that customers can build or customize LLMs without having to move their data, meaning any sensitive information can remain safely stored within the Snowflake platform.

However, Snowflake is provided as a managed service running on a cloud provider of the customer's choosing, and because NeMo has been developed to take advantage of Nvidia's GPU hardware to accelerate AI processing, customers will need to make sure their cloud provider supports GPU-enabled instances to make this all possible.

It also isn't clear if NeMo is being made a standard part of Snowflake, or if the two will have to be licensed as separate packages. We will update this article if we get an answer.

This new partnership is unashamedly jumping on the LLM bandwagon following the surge of interest in generative AI models sparked by ChatGPT, dubbed "the iPhone moment of AI" by Nvidia CEO Jensen Huang.

But according to Nvidia VP for Enterprise Computing Manuvir Das, the partnership with Snowflake allows LLMs to be endowed with the skills such AI models need to fulfill their function within an organization.

"A large language model is basically trained with a lot of data from the internet. And then it is endowed with certain skills. And you can really think of that LLM as like a professional employee in a company. And a professional employee has two things at their disposal. One, they have a lot of knowledge that they've acquired, and the other is they have a set of skills, things they know how to do," Das said.

"So when you take an LLM, essentially, it's like having a new hire into your company, a student straight out of Harvard, for example.

"If you think about it from the company's point of view, you would really like to have not just this new hire, but an employee who's got 20 years of experience of working at your company. They know about the business of your company, they know about the customers, previous interactions with customers, they have access to databases, they have all of that knowledge."

Integrating the model-making engine that is NeMo into Snowflake is intended to let customers take foundation models and train and fine-tune them with the data in their Snowflake Data Cloud so they gain those skills, or to start from the ground up and train a model from scratch, Nvidia said. Either way, they end up with a model unique to them that is also stored in Snowflake.

The NeMo framework features pre-packaged scripts and reference examples, and also provides a library of foundation models that have been pre-trained by Nvidia, according to Das.

Snowflake chairman and CEO Frank Slootman said in a statement that the partnership brings Nvidia's machine learning capabilities to the vast volumes of proprietary and structured enterprise data stored by Snowflake users, which he described as "a new frontier to bringing unprecedented insights, predictions and prescriptions to the global world of business." ®
