To qualify for the role, you must have:
A minimum of 3 years of relevant work experience, with at least 1 year designing and maintaining data pipelines, ETL processes, and database architectures.
Bachelor’s degree (B.E./B.Tech) in Computer Science or IT, or a Diploma in Data Science, Statistics, or a related field.
Strong proficiency in SQL, Python, or Scala for data manipulation, automation, and pipeline development.
Experience working with big data processing and streaming frameworks such as Apache Spark, Hadoop, or Kafka.
Hands-on experience with cloud-based data platforms such as AWS (Redshift, Glue), Google Cloud (BigQuery, Dataflow), or Azure (Synapse, Data Factory).