Qualifications:
- 10+ years of total experience across Data Engineering, Data Warehousing, ETL, Snowflake, Snowpark, and PySpark/Python.
- 6+ years of experience in Snowflake development and the Hadoop, Hive, and Spark frameworks; good knowledge of Snowpark.
- Strong Python, Snowpark development, and SQL skills.
- Certification in Snowflake or Big Data is preferred.
- Experience in SAS is good to have.

Responsibilities:
- Work with big data processing and distributed computing systems such as Spark.
- Implement ETL pipelines and data transformation processes.
- Ensure data quality and integrity in all data processing workflows.
- Troubleshoot and resolve issues in Snowpark/PySpark applications and workflows.
- Understand sources, dependencies, and data flow from converted Snowpark/PySpark code.
- Apply strong programming skills in Python and SQL.
- Use big data technologies such as Hadoop, Hive, and Kafka.
- Apply data warehousing concepts and knowledge of relational databases and SQL.
- Demonstrate and document code lineage.
- Ensure compliance with data security and privacy regulations and organizational standards.
- Apply knowledge of CI/CD pipelines and DevOps practices.
- Bring strong problem-solving and analytical skills.
- Communicate and lead effectively.
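The data-quality responsibility above can be sketched with a minimal, illustrative check in plain Python. Column names and the quarantine rule are hypothetical; a real pipeline would express the same gate with Snowpark or PySpark DataFrame filters before loading.

```python
# Minimal sketch of an ETL data-quality gate: rows missing required
# fields are quarantined rather than loaded downstream.
# REQUIRED_COLUMNS and the row shape are illustrative assumptions.

REQUIRED_COLUMNS = ("customer_id", "order_date")

def validate_rows(rows):
    """Split dict rows into (valid, rejected) by required non-null fields."""
    valid, rejected = [], []
    for row in rows:
        if all(row.get(col) is not None for col in REQUIRED_COLUMNS):
            valid.append(row)
        else:
            rejected.append(row)
    return valid, rejected

if __name__ == "__main__":
    sample = [
        {"customer_id": 1, "order_date": "2024-01-05", "amount": 20.0},
        {"customer_id": None, "order_date": "2024-01-06", "amount": 5.0},
    ]
    ok, bad = validate_rows(sample)
    print(len(ok), len(bad))  # prints: 1 1
```

Keeping rejected rows (instead of silently dropping them) preserves an audit trail, which also supports the lineage and compliance duties listed above.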