
HPE Introduces Advanced Set of Artificial Intelligence Platforms and Services

New offerings accelerate the ability of data scientists, developers and IT departments to rapidly deploy and scale deep learning models


Hewlett Packard Enterprise (HPE) announced new purpose-built platforms and services capabilities to help companies simplify the adoption of artificial intelligence (AI), with an initial focus on a key subset of AI known as deep learning.

To take advantage of deep learning, enterprises need a high performance compute infrastructure to build and train learning models that can manage large volumes of data to recognize patterns in audio, images, videos, text and sensor data.

HPE believes that many organizations today lack several integral requirements for implementing deep learning, including expertise and resources; sophisticated, tailored hardware and software infrastructure; and the integration capabilities required to assimilate different pieces of hardware and software to scale AI systems.

To help customers overcome these challenges and realize the potential of AI, HPE is announcing the following offerings:

  • HPE Rapid Software Development for AI: HPE introduced an integrated hardware and software solution purpose-built for high performance computing and deep learning applications. Built on the HPE Apollo 6500 system in collaboration with Bright Computing to enable rapid deep learning application development, the solution includes pre-configured deep learning software frameworks, libraries, automated software updates and cluster management optimized for deep learning, and supports NVIDIA Tesla V100 GPUs.
  • HPE Deep Learning Cookbook: Built by the AI Research team at Hewlett Packard Labs, the Deep Learning Cookbook is a set of tools to guide customers in selecting the best hardware and software environment for different deep learning tasks. These tools help enterprises estimate the performance of various hardware platforms, characterize the most popular deep learning frameworks, and select the ideal hardware and software stacks to fit their individual needs. The Deep Learning Cookbook can also be used to validate the performance and tune the configuration of already purchased hardware and software stacks. One use case included in the cookbook relates to the HPE Image Classification Reference Designs. These reference designs provide customers with infrastructure configurations optimized to train image classification models for various use cases, such as license plate verification and biological tissue classification. The designs are tested for performance and eliminate guesswork, helping data scientists and IT to be more cost-effective and efficient.
  • HPE AI Innovation Center: Designed for longer-term research projects, the innovation center will serve as a platform for research collaboration among universities, enterprises on the cutting edge of AI research, and HPE researchers. The centers, located in Houston, Palo Alto and Grenoble, will give researchers from academia and enterprises access to infrastructure and tools to continue their research initiatives.
  • Enhanced HPE Centers of Excellence (CoE): Designed to assist IT departments and data scientists who are looking to accelerate their deep learning applications and realize better ROI from their deep learning deployments in the near term, the HPE CoEs offer select customers access to the latest technology and expertise, including the latest NVIDIA GPUs on HPE systems. The current CoEs span five locations: Houston; Palo Alto; Tokyo; Bangalore, India; and Grenoble, France.

“We live in a world today where we’re generating copious amounts of data, and deep learning can help unleash intelligence from this data,” said Pankaj Goyal, vice president, Artificial Intelligence Business, Hewlett Packard Enterprise. “However, a ‘one size fits all’ solution doesn’t work. Each enterprise has unique needs that require a distinct approach to get started, scale and optimize its infrastructure for deep learning. At HPE, we aim to make AI real for our customers no matter where they are in their journeys with our industry-leading infrastructure portfolio, AI expertise, world-class research and ecosystem of partners.”

In its mission to help make AI real for its customers, HPE offers flexible consumption services for HPE infrastructure, which avoid over-provisioning, increase cost savings and scale up and down as needed to accommodate the needs of deep learning deployments.

“Artificial intelligence has the ability to transform scientific data analysis, making predictions and surprising connections,” said Paul Padley, professor of physics and astronomy, Rice University. “We are at a precipice where the AI revolution can now have a profound impact on reshaping innovation, science, education and society, at large. Access to the HPE AI innovation centers will help us continue to advance our research efforts in our journey to making academic progress by using the tools and solutions available to us through HPE.”


