Overview

Data Scientists | Remote | Long Term

Must-Have Skills:
• Total IT experience – 10+ years
• Python – 6+ years
• PySpark – 6+ years
• PyTorch – 6+ years
• GCP – 3+ years
• Web development – 3+ years prior experience
• Docker – 4+ years
• Kubeflow – 4+ years

Key Responsibilities:
• Work with the AI/ML Platform Deployment team within the eCommerce Analytics team. The broader team is currently on a transformation path, and this role will be instrumental in enabling its vision.
• Work closely with other Data Scientists to help productionize models and maintain them in production.
• Deploy and configure Kubernetes components for the production cluster, including API Gateway, Ingress, model serving, logging, monitoring, cron jobs, etc. Improve the model deployment process for MLEs with faster builds and simplified workflows.
• Be a technical leader on various projects across platforms and a hands-on contributor to the entire platform’s architecture.
• Lead operational excellence initiatives in the AI/ML space, including efficient use of resources, identifying optimization opportunities, forecasting capacity, etc.
• Design and implement different flavors of architecture to deliver better system performance and resiliency.
• Develop capability requirements and a transition plan for the next generation of AI/ML deployment technology, tools, and processes, to help Walmart efficiently improve performance at scale.

Tools/Skills (hands-on experience is a must):
• Ability to transform designs from the ground up and lead innovation in system design
• Deep understanding of GenAI applications and the NLP field
• Hands-on experience in the design and development of NLP models
• Experience building LLM-based applications
• Design and development of MLOps pipelines
• Fundamental understanding of parameterized and non-parameterized data science algorithms
• Knowledge of AI/ML application lifecycles and workflows
• Experience in the design and development of ML pipelines using containerized components
• Have worked on at least one managed Kubernetes cloud offering (EKS/GKE/AKS) or on-prem Kubernetes (native Kubernetes, Gravity, MetalK8s)
• Programming experience with Python, PySpark, PyTorch, LangChain, Docker, Kubeflow
• Ability to use observability tools (Splunk, Prometheus, and Grafana) to inspect logs and metrics and diagnose issues within the system
• Experience with web development

Education & Experience:
• 6+ years of relevant experience in roles with responsibility over data platforms and data operations, dealing with large volumes of data in cloud-based distributed computing environments
• Graduate degree preferred, in a quantitative discipline (e.g., computer engineering, computer science, economics, math, operations research)
• Proven ability to solve enterprise-level data operations problems at scale that require cross-functional collaboration for solution development, implementation, and adoption

Notes: We are looking for a data scientist who can contribute to the following domains:
• Design and development of GenAI applications
• Deep understanding of the NLP field
• Hands-on experience in the design and development of NLP models
• Experience building LLM-based applications
• Design and development of MLOps pipelines
• Fundamental understanding of parameterized and non-parameterized data science algorithms
• Knowledge of AI/ML application lifecycles and workflows
• Experience in the design and development of ML pipelines using containerized components

Skills: Python, PySpark, PyTorch, LangChain, GCP, Web development, Docker, Kubeflow

Company:

SnapX.ai

Level of experience (years):

Senior (5+ years of experience)

How to apply:

Please mention NLP People as a source when applying

https://www.linkedin.com/jobs/view/remote-data-scientists-with-gcp-at-snapx-ai-3841509059?trk=public_jobs_topcard-title
