Design, build, and maintain scalable and reliable data pipelines using Python and workflow orchestration tools like Apache Airflow.
Develop and optimize ETL/ELT workflows to ingest, transform, and load data from multiple sources such as REST APIs, databases, and cloud storage (AWS S3).
Design and manage data models, handle schema migrations, and optimize complex queries across Amazon Redshift, PostgreSQL, and MySQL.
Ensure high standards of data quality, integrity, and availability to support analytics, reporting, and machine learning use cases.
Drive the adoption of AI/ML capabilities within the team, leveraging prior experience in data science projects.
Take ownership of building and evolving ML-ready data pipelines and workflows.
Contribute to the end-to-end AI/ML lifecycle, including data preparation, feature engineering, model support, and deployment enablement.
Introduce and help the team adopt modern tools and technologies in AI/ML and data engineering.
Identify opportunities to apply predictive analytics and machine learning to solve business problems.
Monitor, troubleshoot, and resolve data pipeline failures and performance bottlenecks.
Maintain clean, version-controlled code in Git and uphold engineering best practices through peer code reviews.
Document data pipelines, workflows, and system architecture for maintainability and knowledge sharing.