[This article was first published here – https://www.linkedin.com/pulse/power9-ibm-cloud-private-pradipta-banerjee/]
“It is the harmony of the diverse parts, their symmetry, their happy balance; in a word it is all that introduces order, all that gives unity, that permits us to see clearly and to comprehend at once both the ensemble and the details.” – Henri Poincaré
Whether it’s virtual assistants like Siri and Google Assistant, or areas like gaming, driverless vehicles and fraud detection, AI is everywhere today. Look around you: your smartphone, your bank, your online shopping site, even your home – all use AI in some form or another. Every business is looking to leverage AI, and it seems AI has gathered the whole gamut of digital operations under its mighty wings.
The demands of AI are, understandably, putting the focus back on hardware. The following article on technology requirements for deep and machine learning touches upon hardware aspects like cache and memory capacity, memory bandwidth requirements, and accelerators (GPUs and FPGAs), and how these components impact AI.
Increasingly, we are seeing AI-related benchmarks comparing training times on different hardware, which is key for data scientists. Benchmark results for AI frameworks may soon become common across hardware vendors, and who knows, we might even see a SPEC GPU benchmark.
That said, the choice of software platform is equally important in providing the agility data scientists and developers need. Data scientists are increasingly adopting containers to improve their workflows, benefiting from dependency management, reproducible artefacts and more. In an enterprise AI setup, you will also need capabilities like optimal resource utilisation, self-service, resource controls, CI/CD pipelines, security and governance, among other things.
The key point here is the need for a hardware and software solution designed for AI – the harmony of the diverse parts, their symmetry, and their happy balance!
It gives me immense pleasure to share with you a complete hardware and software stack purpose-built for AI – Power9 servers with IBM Cloud Private.
The Power9 processor is built from the ground up for enterprise AI. It is the only processor with state-of-the-art I/O subsystem technology, including next-generation NVIDIA NVLink, PCIe Gen4 and OpenCAPI. Read more about the specs here.
IBM Cloud Private (ICP), built on open-source technologies including Kubernetes, is an enterprise-grade private cloud platform for application development behind your firewall. The first release of Cloud Private came in October last year, and there has been no looking back since. The latest version of IBM Cloud Private, with an increased focus on security, compliance and scalability, was released on 25th May 2018.
This was made possible only by the stupendous effort of a highly competent and dedicated team. Fine-tuning the entire software stack to perform optimally on Power9 servers and leveraging the hardware’s capabilities to the fullest were among the driving principles. The hard work, late nights and weekend shifts of everyone involved made this happen.
I was fortunate to be part of this exciting journey, and I look forward to seeing it used to solve your business problems.
Check out the following references to learn more:
- Faster training times with Power9 and NVIDIA GPUs
- Enterprise AI with IBM Cloud Private and Power servers
- Setting up a CI/CD pipeline using IBM Cloud Private
- Why hardware matters in the cognitive enterprise
- IBM Cloud Private on Power9 performance
Please feel free to connect with me for any questions.