Position Summary, Responsibilities, and Expectations:
* Design, develop, and maintain machine learning models for use cases such as prediction, classification, and optimization
* Build, optimize, and automate data pipelines using Python, PySpark, and SQL within Fabric and Snowflake environments
* Perform exploratory data analysis (EDA) and feature engineering to improve model performance and interpretability
* Deploy, monitor, and maintain production-level models to ensure scalability, reliability, and accuracy
* Collaborate with business, product, and engineering teams to translate analytical insights into actionable solutions
* Participate in AI and Large Language Model (LLM) initiatives, integrating language or generative models into workflows
* Work with global, cross-functional teams across different time zones, requiring flexibility in working hours
* Demonstrate a self-driven, proactive, and detail-oriented mindset, with strong ownership and adaptability in a fast-paced environment
Essential Skills and Experience:
* Master’s degree in Data Science, Computer Science, Mathematics, Statistics, Finance, or a related field
* Minimum of 3 years of hands-on experience in data science, analytics, or machine learning roles
* Strong proficiency in Python (Pandas, NumPy, Scikit-learn) and solid understanding of machine learning workflows
* Experience with SQL and PySpark for large-scale data processing
* Familiarity with modern data platforms such as Snowflake and Fabric
* Experience with Git-based version control systems (e.g., GitHub or Bitbucket)
* Knowledge of AI applications, including prompt-based systems, and exposure to Large Language Models (LLMs)
* Strong English communication skills, with the ability to collaborate effectively with global teams and present insights clearly
* Ability to work flexible hours to support collaboration across different time zones