Company Introduction
At BID Operations, we are passionate about supporting our clients in their journey towards success. Our mission is to empower you to thrive by handling the essential yet time-consuming aspects of your business operations, allowing you to concentrate on strategic growth and innovation.
Job Summary
As a Data Engineer, you will play a pivotal role in designing, developing, and maintaining scalable data pipelines, leveraging technologies such as Kafka, RabbitMQ, Spark, Airflow, and ClickHouse within cloud environments. You will ensure the efficient and reliable flow of data across our systems. Your expertise will drive the optimisation of our data architecture, enabling insightful analytics and supporting our data-driven decision-making processes.
Job Responsibilities
- Design and implement highly scalable, reliable, and performant data pipelines to support ETL processes, integrating technologies such as Kafka, Spark, ClickHouse, and RabbitMQ.
- Work within cloud environments to deploy and manage data services, ensuring best practices in security, scalability, and efficiency.
- Develop and maintain robust data storage solutions, optimising data storage and retrieval processes in ClickHouse or similar technologies.
- Collaborate with data analysts and other stakeholders to understand data needs and implement systems that support data analysis and reporting.
- Monitor, troubleshoot, and optimise data pipelines, identifying and resolving performance bottlenecks and ensuring data quality and integrity.
- Participate in the design and implementation of data models and schemas that support business processes and objectives.
- Stay abreast of industry trends and advancements in data engineering technologies and methodologies, continuously seeking ways to improve our data systems.
- Ensure compliance with data governance and security policies.
Requirements
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- Proven experience in data engineering, with a strong background in designing and implementing ETL processes within cloud environments.
- Strong programming skills in Python, with experience in developing robust, maintainable, and scalable data processing pipelines.
- Extensive SQL knowledge and experience.
- Excellent problem-solving skills and the ability to work collaboratively in a team environment.
- Strong communication skills, with the ability to convey complex technical concepts to non-technical stakeholders.
Benefits
- Competitive salary commensurate with experience.
- Opportunities for professional development and career advancement.
- Collaborative and supportive work environment.
- Flexible, smart-casual dress code.