Source: Cloud Computing & DevOps Meetup Bangalore

AI on Cloud

This is a FREE meetup. Participants of all experience levels are welcome.

Agenda:

10:00 AM - 10:30 AM - Registration

10:30 AM - 11:15 AM - "Building and Deploying AI/ML models on Cloud using Cloud-Native way" by Vishwanath, GE Oil & Gas

About Vishwanath: https://www.linkedin.com/in/vishwanath-shankar-8233418/

Abstract: The biggest challenge enterprises face these days is scaling and running the AI/ML models they develop in-house, and a hybrid cloud setup makes the challenge even bigger. In this talk, we will go through the basics of an ML workflow using a practical dataset: building an ML model, then deploying and scaling it using cloud technologies.
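
As a taste of the workflow the talk covers, here is a minimal sketch in Python: train a classifier on a stand-in dataset, evaluate it, and persist the artifact for deployment. The dataset, model, and storage choices below are illustrative assumptions, not the talk's actual demo.

```python
# Minimal sketch of the build-and-persist half of an ML workflow.
# The dataset and model here are illustrative stand-ins; the talk's
# actual dataset and cloud deployment stack are not specified.
import joblib
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a practical dataset and hold out a test split for evaluation.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Build and evaluate the model.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print(f"test accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")

# Persist the trained model so a containerized service can load it;
# in a cloud-native setup this artifact would typically go to object
# storage (e.g., S3/GCS) or a model registry.
joblib.dump(model, "model.joblib")
```

From here, the cloud-native path is usually to bake the artifact and its serving code into a container image and let an orchestrator such as Kubernetes handle scaling.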

11:15 AM - 11:50 AM - "Thanos: Achieve highly scalable and global monitoring with Prometheus" by Kirti Ranjan Parida, Lead DevOps Engineer at Informatica

About Kirti Ranjan: https://www.linkedin.com/in/kirti-ranjan-parida/

Abstract: At Informatica, cloudtrust supports multiple infrastructures for product teams spanning three major cloud providers (AWS, Azure, GCP) and multiple regions. Hence, we need a highly scalable and resilient monitoring system that provides not only system uptime but also historical data about our systems and applications. We also need a single pane of glass to see what is happening across our systems. To solve this, we use a combination of Prometheus, Thanos, and Grafana to monitor our systems (~2,000 cloud instances, ~20 Kubernetes clusters, and ~3k containers, with the Kubernetes infrastructure expected to grow 5x by the end of 2020). Thanos is an open-source project by Improbable that seamlessly transforms existing Prometheus deployments in clusters around the world into a unified monitoring system with unbounded historical data storage.
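
To illustrate the "single pane of glass" idea, here is a minimal sketch of querying a Thanos Query endpoint, which exposes the standard Prometheus HTTP API (/api/v1/query) fanned out across all the Prometheus instances it federates. The endpoint URL and job label are hypothetical placeholders, not Informatica's actual setup.

```python
# Minimal sketch: one PromQL query against Thanos Query returns data
# from every federated Prometheus, deduplicated into a single view.
import requests

THANOS_QUERY_URL = "http://thanos-query.example.com:9090"  # hypothetical endpoint

resp = requests.get(
    f"{THANOS_QUERY_URL}/api/v1/query",
    params={"query": 'sum by (cluster) (up{job="node-exporter"})'},
    timeout=10,
)
resp.raise_for_status()

# Each series carries the external labels (e.g., cluster/region) that
# its source Prometheus attaches, which is what lets Thanos present
# all clusters in one place.
for series in resp.json()["data"]["result"]:
    print(series["metric"], "=>", series["value"][1])
```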

11:50 AM - 12:00 PM - Break & Networking

12:00 PM - 12:45 PM - "Serve ML Models (low-latency prediction systems) at Scale in the Cloud and at Edge" by Srinivasa Rao, Cisco

About Srinivasa: https://www.linkedin.com/in/aravilli/

Abstract: Low-latency prediction-serving systems are key to serving and scaling ML/AI applications at the edge and in the cloud. In this talk, I will present the challenges involved in scaling ML applications and serving predictions, and introduce open-source frameworks such as Clipper, Kubeflow, and MLflow that can be used to overcome them. I will walk through a sample classification use case (phishing detection), showing how to deploy, scale, and serve it with these frameworks, and explain the challenges in detail.
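
For context, here is a minimal sketch of the serving pattern the talk examines: a service that loads a trained model once at startup and answers prediction requests over HTTP. The Flask app and feature schema are illustrative assumptions; Clipper, Kubeflow, and MLflow productionize this same pattern with request batching, model versioning, and autoscaling.

```python
# Minimal sketch of a low-latency prediction service: keep the model
# in memory and answer JSON requests. The feature schema below is a
# placeholder, not the talk's actual phishing-detection model.
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")  # loaded once at startup, not per request

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a body like {"features": [[...], [...]]}; batching several
    # inputs into one model call is a key lever for latency/throughput.
    features = request.get_json(force=True)["features"]
    preds = model.predict(features).tolist()
    return jsonify({"predictions": preds})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```

A POST to /predict with a JSON body of feature rows returns predictions; the same load-once, predict-many pattern applies at the edge, typically with a smaller or compiled model format.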

12:45 PM - 01:00 PM - Q&A & Networking

Sponsored by - Microsoft
Venue partner - Informatica
----------------------------------------------------------------------------------------------------
Terms & Conditions: You understand and accept that you will abide by KonfHub's Code of Conduct (https://konfhub.com/codeofconduct.html).
