ABOUT
The last few years have seen many companies rush to the cloud as part of their "cloud first" or "cloud native" strategies. But after the initial enthusiasm, cloud repatriation is on the rise: those same companies are looking to move back on premises as the realities of cloud costs and constraints become better understood. Apart from a few select situations, there is rarely a clear case for going "all cloud" or "all on premises". This talk aims to clarify how to make that decision more effectively for machine learning workloads.
Deciding where to run ML workloads is rarely straightforward. You need to factor in cost (both explicit and hidden), the complexity involved and the skills needed to navigate it, the tooling that is available, and, most importantly, data security and sovereignty. The aim of this talk is to help CTOs and IT managers understand what goes into that decision and to quantify the complexity so they can make the right call.
This talk will cover:
- The types of machine learning workloads and what infrastructure they require
- The economics of on-prem vs. cloud workloads
- The tooling needed by data science and ML teams
- Where the complexity lies in running "on prem" vs. "in the cloud"