Automating generative AI development

Generative AI and powerful server hardware support the quick design, build, and delivery of new AI applications and models

Sponsored Feature Artificial Intelligence (AI) has dominated the business and technology headlines lately, especially with the emergence of generative AI technologies like ChatGPT. But for companies beginning or expanding their use of AI and generative AI, there remain significant technological challenges, as well as training and procedural hurdles to overcome.

Certainly the number of firms using AI continues to expand across the globe and across virtually all industries. According to the IBM Global AI Adoption Index 2022 report, uptake of the technology is flourishing. It calculates that the global AI adoption rate grew steadily in 2022 and now stands at 35 percent, a four-point increase on the year before. Another 42 percent of the respondents it surveyed said they are exploring how they might start to use AI.

"Generally speaking, this is a new domain and it's evolving really fast," says Assaf Katan, chief business officer at Israel-based AT developer Deci. "There's a combination of short-term and long-term challenges. In the short term, it's really about choosing your path. 'Do I want to build in-house capabilities, work with open-source models, and tweak them myself; or do I want to work through an API with something like Open AI? Then I don't need to be super proficient, but also my level of control over model performance, and ability to customize it while ensuring data privacy is limited.'"

Those are the initial short-term decisions. Then looking to the longer term, companies have to consider the need to scale and the expense of doing so. "Looking long-term, let's assume a company wants to build a generative AI service. They believe it can improve efficiencies, improve workflows, and it will scale," explains Katan. "They will need to think about the implications of scaling the use of the model that powers their solution. These generative models are enormous with billions of parameters, so the compute power they require for inference is huge. How do you build it so that if you scale, you can still deal with the cost?"
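To put rough numbers on that scaling concern (these figures are illustrative, not from Deci), a common rule of thumb is that a decoder-style generative model performs roughly two floating-point operations per parameter for every token it generates. A quick back-of-envelope sketch:

```python
# Back-of-envelope inference cost for a decoder-style generative model.
# Rough rule of thumb: ~2 FLOPs per parameter per generated token
# (ignores attention overhead, batching, caching, and quantization).

def flops_per_request(params: float, tokens_generated: int) -> float:
    return 2 * params * tokens_generated

for params in (7e9, 70e9):  # hypothetical 7B and 70B parameter models
    flops = flops_per_request(params, tokens_generated=500)
    print(f"{params / 1e9:.0f}B params, 500 tokens -> {flops / 1e12:.0f} TFLOPs per request")
```

On that approximation, a tenfold increase in model size means roughly tenfold more compute per request, which is exactly the cost curve Katan is warning about.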

Advancing Artificial Intelligence

The combination of generative AI and more powerful server hardware better suited to processing those workloads is opening new opportunities for companies to quickly design, build, and deliver new AI applications and models. According to the same IBM research, AI is helping companies in a variety of ways. That includes addressing labor and talent shortages by automating repetitive tasks; saving time by automating IT, business, and network processes; saving costs; making operations more efficient; improving IT and network performance; and ultimately, providing a better experience for customers.

Most of the tangible business benefits are centered around efficiency, explains Katan. "One enterprise we're talking to has a large customer base, so there are a lot of customer care and customer support calls," he says. "Today they're using standard non-AI chatbots, with a lot of the usual basic problems. They're planning to use generative AI to cover the vast majority of those calls and leave the rest for human agents to handle."

Katan mentions another company Deci is working with that is using AI to generate content. "They have thousands of different products and SKUs. They use generative AI to write product descriptions," he explains. "Now these product descriptions aren't just two-liners. They include the product descriptions, technical specifications, and feature information. They're using generative AI to write 90 percent of those product descriptions."

AutoNAC enables efficient deep learning

The Deci deep learning platform is powered by Automated Neural Architecture Construction (AutoNAC) technology. "Our offering is built out of two main components," explains Katan. "One is the neural network, where our technology helps you build efficient neural network architectures optimized for a specific use case and the hardware they're running on, and set up to meet specific KPIs."

AutoNAC carries out a multi-objective search within a search space of tens of billions of potential architectures to identify the optimal one: an architecture that strikes a balance between accuracy, low latency, and high throughput, and is tailored to distinct tasks, data characteristics, performance goals, and specific hardware. Deci's AutoNAC has generated some of the world's most efficient computer vision and generative models, including YOLO-NAS, DeciLM 6B, and DeciDiffusion.
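AutoNAC itself is proprietary, but the general shape of a multi-objective architecture search can be sketched in a few lines. Everything below is a hypothetical illustration: the candidate architectures, weights, and latency threshold are invented for the example and do not describe Deci's implementation.

```python
# Minimal sketch of a multi-objective architecture search (not AutoNAC itself).
# Each candidate architecture is scored on accuracy, latency, and throughput,
# and the best trade-off for the target hardware and KPIs is selected.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    predicted_accuracy: float   # estimated by a predictor, without full training
    latency_ms: float           # measured or estimated on the target hardware
    throughput_qps: float

def score(c: Candidate, max_latency_ms: float) -> float:
    # Hard constraint: discard candidates that miss the latency KPI.
    if c.latency_ms > max_latency_ms:
        return float("-inf")
    # Weighted trade-off between accuracy and throughput (weights are illustrative).
    return 0.7 * c.predicted_accuracy + 0.3 * (c.throughput_qps / 1000.0)

candidates = [
    Candidate("arch_a", 0.91, 12.0, 450.0),
    Candidate("arch_b", 0.93, 28.0, 210.0),
    Candidate("arch_c", 0.90, 8.0, 800.0),
]

best = max(candidates, key=lambda c: score(c, max_latency_ms=15.0))
print(f"Selected {best.name}")  # arch_b misses the latency KPI; highest-scoring survivor wins
```

The predicted_accuracy field hints at the next point: scoring candidates with a predictor rather than training each one is what keeps a search over billions of architectures tractable.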

The AutoNAC engine can also predict the accuracy and performance of the architecture it will generate without having to train the model first, as is required for so many AI models. The end result is an automatically compiled AI model developed much more quickly and inexpensively than before, adds Katan: "You're saving money, and providing higher throughput, which means a better user experience."

The other component is Infery, the Deci inference tool. "Infery further optimizes the runtime performance of the neural network to ensure it runs as efficiently as possible on specific hardware," he continues. "We have strong inference tools optimized to different use cases. For generative AI, we build a specific inference tool optimized for these large models because they do behave differently than smaller ones." 

Generative AI models, with their complex, iterative inference processes, differ from the conventional static models used in tasks like object detection. Traditional optimization tools fall short here; these dynamic architectures require specialized tooling.
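A simple way to see the difference is to compare the control flow of the two workloads. The sketch below uses toy stand-ins rather than any real framework's API: a static model is called once per input, while an autoregressive generative model is called once per generated token, so its latency and memory footprint change as the output grows.

```python
# Static vs iterative (generative) inference, with toy stand-ins.

def toy_model(tokens):
    # Stand-in for a forward pass: deterministically returns a "next token".
    return (sum(tokens) + len(tokens)) % 50

END_OF_SEQUENCE = 0

def static_inference(model, inputs):
    # Static workload: one call per input, fixed shapes, easy to optimize ahead of time.
    return [model([x]) for x in inputs]

def generate(model, prompt_tokens, max_new_tokens=20):
    # Iterative workload: the model is invoked once per token and the sequence
    # grows each step, so compute and memory per step change as generation proceeds.
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        next_token = model(tokens)          # repeated forward passes
        tokens.append(next_token)
        if next_token == END_OF_SEQUENCE:
            break
    return tokens

print(static_inference(toy_model, [3, 7, 11]))
print(generate(toy_model, [3, 7, 11]))
```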

AI teams today spend many months and considerable resources developing and optimizing their models, with endless trial-and-error iterations when trying to design models manually, yet only 30 percent of models make it to production. The Deci platform streamlines model development and optimization, eliminating uncertainty and guaranteeing success in production. With Deci, AI teams can reach production in days instead of months.

AI at work

The Deci computer vision and generative AI platform, paired with Lenovo ThinkSystem servers, is being used successfully in a number of industry verticals, including manufacturing, retail, and even agriculture. "Deci is in two primary markets," explains Katan, "computer vision and generative AI."

The computer vision applications are used in manufacturing and automotive plants for visual inspections to identify defective products, for example. In agriculture, Deci has customers using computer vision cameras to monitor animals' health, size, and growth. In retail, computer vision is used for automated checkout. "All that is using good old computer vision-based AI to do analysis and make decisions," he adds.

On the generative AI side, Deci is used by customers in a range of markets, serving functions found in any industry. "There is customer care across the board," he says. "Then in financial services, there are more specific use cases like doing credit analysis. Sales and marketing teams are using customer data to build custom sales proposals. They're also doing custom marketing, and not just the content. There's even a visual component, using tools like text-to-image and combining images to create customized marketing packages."

Strength in numbers

For companies looking to roll out AI systems and models, it can be a daunting and complex process. Working together, Lenovo and Deci hope to simplify the process by having the Deci AI platform and models configured to operate most efficiently on Lenovo servers. That makes it one-stop shopping for companies interested in developing their own AI models.

One aspect of that partnership is Deci's participation in the Lenovo AI Innovators Program, which helps both companies by getting Deci's deep learning platform and natural language processing models running on Lenovo ThinkSystem servers. The program gives Deci access to Lenovo's in-house AI expertise and provides the partners with pre-configured hardware ready to run the AutoNAC platform.

The Lenovo AI Innovators program also provides a vast network of partner companies and a global reach spanning 180 countries. Lenovo has established AI centers of excellence where partner companies can share expertise and resources.

And it's not just Lenovo's scale that adds to the partnership, according to Katan. "When you approach a customer, you need to know what the use case is and what the timing is," he says. "The timing is important. Maybe you have a great product, but it's made available too early or too late. When we work with a company like Lenovo, they know the intended use case and the timing better, which helps to build out a solution."

As an example, Katan explains how the combined AutoNAC platform and Lenovo servers would fit into a relatively common use case. "Say you're building a visual inspection solution for your manufacturing plant," he says. "It needs the Lenovo computer. It needs the cameras. It needs storage for the data, and so on. In this situation, we are the AI component." 

Big brother steps in

Lenovo brings the global reach, reliability, and the sales and support capabilities. Deci brings its core technology. "To us, they're like the big brother," says Katan. "Our strength is around the AI technology. So, it's a combination of their reach and customers with our technology. That is our promise to Lenovo. We will always bring the best and strongest AI technology, and we'll keep working and focusing and make it as strong as possible."

Lenovo ThinkSystem servers are well suited to supporting compute-heavy functions like AI workloads. "When you think of an enterprise that wants to use generative AI, and build a service that will be able to scale, the costs are enormous," explains Katan. "And these models are just getting bigger. The bigger the model, the more compute it requires and consumes, so building models that consume less compute means less hardware expense."

The Deci partnership with Lenovo is relatively new, having started in early 2023, and Katan is eagerly looking ahead. "From our end, we will keep expanding our support matrix," adds Katan. "We'll expand the type and breadth of use cases we support. The partnership with Lenovo just started recently, so we've been doing technology validation to help the Lenovo people better understand our platform and value proposition."

Sponsored by Lenovo.
