A lot of folks ask us, "What do you guys do, exactly?" And I'll admit that it can sometimes be a difficult question to answer. One thing we usually explain is that we do unicorn consulting. That is to say, Providentia shines in those niche situations where the client is literally the only one in the world with a particular problem. A lot of consulting firms shy away from these sorts of problems because they can be very difficult to scope and the knowledge gained from solving them is not readily reusable.
Fair enough.
But one thing that makes taking on these tremendous problems absolutely rewarding is that our team has depth in a wide range of areas that are themselves considered "niche". And that makes us the perfect choice for anyone trying to do something that's never been tried before.
Solana is trying to change the world by creating the most performant blockchain. They understand that embarking on such a challenge requires expertise not just in blockchain design and scaling, but also in infrastructure deployment, design, and middleware technologies at scale. In conjunction with Kudelski Security, the Solana team asked Providentia Worldwide to provide a detailed analysis of their system, to help suss out issues as they grow and to help them become the platform they envision. We were proud to rise to the challenge.
So, what do we do? Take a read for yourself: Solana has been kind enough to make the full audit available online. It details the kinds of tests and analyses we perform, as well as our mitigation proposals and strategies.
Contact us so we can help you too.
Machine Learning at HPC User Forum: Drilling into Specific Use Cases
September 22, 2017 by Arno Kolster
The 66th HPC User Forum, held this month in Milwaukee, focused on the latest trends in modern computing – deep learning, machine learning and AI – and some common themes became obvious: first, that ML and DL are currently focused on specific, rather than general, use cases; and second, that ML and DL need to be part of an integrated workflow to be effective.
This was exemplified by Dr. Maarten Sierhuis of the Nissan Research Facility Silicon Valley with his presentation "Technologies for Making Self-Driving Vehicles the Norm." One of the most engaging talks of the forum, Dr. Sierhuis's multimedia presentation on the triumphs and challenges Nissan faces in developing its self-driving vehicle program showcased how machine and deep learning "drive" the autonomous vehicle revolution.
The challenge that Nissan and other deep learning practitioners face is that current deep learning algorithms are trained to do one thing extremely well – the specific use case: image recognition of stop signs, for example. Once an algorithm learns to recognize stop signs, the same discrete learning effort must be repeated for every other road sign a vehicle may encounter. To create a general-purpose "road sign learning algorithm", you need not only a massive amount of image data (in the tens of millions of varied images) but also the compute to power the learning effort.
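To make the "one thing extremely well" point concrete, here is a minimal sketch of a single-task classifier in Keras. Everything in it is illustrative rather than drawn from any of the talks: the directory layout (a hypothetical data/train folder with stop_sign/ and other/ subfolders) and network shape are assumptions, and a production road-sign model would need far more data and a much deeper network. The structural point is what matters: this model answers exactly one question.

```python
# Minimal single-task classifier sketch (illustrative only; assumes a
# recent TensorFlow and a hypothetical data/train directory with
# stop_sign/ and other/ subfolders -- not from any talk above).
import tensorflow as tf

# Labeled images, one subfolder per class.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train",
    label_mode="binary",   # one yes/no question: stop sign or not
    image_size=(64, 64),
    batch_size=32,
)

# A small convolutional network that learns exactly one task.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # single output: P(stop sign)
])

model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)

# Recognizing yield signs, speed limits, etc. means gathering new labeled
# data and repeating this entire process -- the "discrete learning" cost
# described above.
```

The network itself is tiny; the cost lives in the loop around it. Every new sign class restarts the data-collection, labeling and training cycle, which is exactly why a general-purpose version demands so much more data and compute.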
Dr. Weng-Keen Wong from the NSF echoed much the same distinction between specific and general-case algorithms in his talk "Research in Deep Learning: A Perspective From NSF," and the point came up again from Nvidia's Dale Southard during the disruptive technology panel. In his presentation "Machine and Deep Learning: Practical Deployments and Best Practices for the Next Two Years," Arno Kolster from Providentia Worldwide likewise argued that general-purpose learning algorithms are clearly the way to go, but are still some time out.
Nissan's Dr. Sierhuis went on to highlight challenges that computers still face but that human drivers take for granted. For example, what does an autonomous vehicle do when a road crew is blocking the road in front of it? A human driver would simply move into the opposite lane to "just go around," but to an algorithm this breaks all the rules: crossing a double line, checking the opposite lane for oncoming traffic, shoulder checking, ensuring no pedestrians are crossing, and so on. All of it requires real-time re-programming, both for the vehicle encountering the obstacle and for the other vehicles arriving at it.
Nissan proposes an "FAA-like" control system, but the viability of such a system remains to be seen. Certainly, autonomous technologies are slowly being integrated into new cars to augment human drivers, but a complete self-driving vehicle won't appear in the marketplace overnight; cars will continue to function in a hybrid mode for some time. Rest assured, though, many of today's young folks will likely never learn how to drive (or ask their parents to borrow the car on Saturday night).
This algorithmic specificity spotlights the difficulty of integrating deep learning into an actual production workflow.
Tim Barr's (Cray) "Perspectives on HPC-Enabled AI" showed how Cray's HPC technologies can be leveraged for machine and deep learning in vision, speech and language. Stating that it all starts with analytics, Mr. Barr illustrated how companies such as Daimler improve manufacturing processes and products by leveraging deep learning to curtail noise and reduce vibration in their newest vehicles. Nikunj Oza from NASA Ames gave examples of machine learning behind aviation safety and astronaut health maintenance in "NASA Perspective on Deep Learning." Dr. Oza's background in analytics brought a fresh perspective to the proceedings and showcased that learning from historical data has earned a real place alongside modeling among industrial best practices.
In the simulation space, a fascinating talk from the LLNL HPC4Mfg program was William Elmer's (LLNL) discussion of Procter & Gamble's "Faster Turnaround for Multiscale Models of Paper Fiber Products." Simulating various paper product textures and fibers greatly reduces the energy consumed by drying and compaction. Likewise, Shiloh Industries' Hal Gerber described "High Pressure Casting for Structural Requirements and The Implications on Simulation." Shiloh's team leverages HPC to change vehicle structure, especially in creating lighter components with composites like carbon fiber and mixed materials.
It's clear from the discussion that machine learning and AI are set to become first-class citizens alongside traditional simulation within the HPC community in short order. The field is still unproven, with a wide variety of new software implementations, and Hewlett Packard Labs presented a first-of-its-kind analysis of ML benchmarking on HPC platforms. Natalia Vassilieva's "Characterization and Benchmarking of Deep Learning" showcased the "Book of Recipes" HP Labs is developing across various hardware and software configurations. Coming fresh off HPE's integration of SGI technology into its stack, the talk not only highlighted the newer software platforms that learning systems leverage, but also demonstrated that HPE's portfolio of systems and experience in both HPC and hyperscale environments is impressive indeed.
Graham Anthony, CFO of BioVista, spoke on the "Pursuit of Sustainable Healthcare Through Personalized Medicine With HPC." Mr. Anthony was very passionate about the work BioVista is doing with HPE, and about how HPC and deep learning can lower the cost of healthcare by increasing the precision of treatment through better insights derived from data. BioVista takes insight from deep learning and feeds it into simulations for better treatments – a true illustration that learning is here to stay and works hand in hand with traditional HPC business process flows.
In his talk entitled "Charliecloud: Containers are Good for More Than Serving Cat Pictures?" Reid Priedhorsky from LANL covered a wide range of topics, including software stacks and design philosophy, and demoed Charliecloud, which enables the execution of Docker containers on supercomputers.
The tongue-in-cheek title about cat pictures being synonymous with deep learning image recognition is no accident. Stand-alone image recognition is really cool, but as expounded upon above, the true benefit of deep learning comes from an integrated workflow in which data sources are ingested by a general-purpose deep learning platform, with outcomes that benefit business, industry and academia.
From the talks, it is also clear that machine learning, deep learning and AI are presently fueled more by industry than by academia. This could be due to strategic and competitive business drivers, as well as the sheer amount of data that companies like Facebook, Baidu and Google have available to drive AI research and deep learning-backed products. Traditional HPC might not be needed to push these disciplines forward, which is likely why we see the trend becoming more prevalent in everyday news.
There was obvious concern from the audience about a future where machines rule the world. Ethical questions about companies knowingly replacing workers with robots or AI came up in a very lively discussion. Some argued that there is a place for both humans and AI, quieting the fear that tens of thousands of people would be replaced by algorithms and robots. Others see a more dismal human future, with malevolent robots taking control and little left for humans to do. These are, of course, difficult questions to answer, and further debates will engage and entertain everyone as we keep moving toward an uncertain, technical future.
On a lighter note, Wednesday evening’s dinner featured a local volunteer docent, Dave Fehlauer, giving an enjoyable, informative talk on Captain Frederick Pabst: his family, his world and his well-known Milwaukee staple, The Pabst Brewing Company.
By all accounts, this was one of the most enjoyable HPC User Forum meetings to date. With a coherent theme and a dynamic range of presentations, the Forum kept everyone's interest and showcased the realm of possibilities within this encouraging trend in computing, from both industry and academic research perspectives.
The next domestic HPC User Forum will be held April 16-18, 2018 at the Loews Ventana Canyon in Tucson, Arizona. See http://hpcuserforum.com for further information.
About the Author
Arno Kolster is Principal & Co-Founder of Providentia Worldwide, a technical consulting firm. Arno focuses on bridging enterprise and HPC architectures and was co-winner of IDC's HPC Innovation Award with his partner Ryan Quick in 2012 and 2014. He was the recipient of the Alan El Faye HPC Inspiration Award in 2016. Arno can be reached at