Job Title
Senior Data Engineer – PySpark, GCP, Spark, Hadoop, Hive, SQL - Kolkata

Candidates ready to join immediately can share their details via email for quick processing: CCTC | ECTC | Notice Period | Location Preference. Act fast for immediate attention! ⏳

Mandatory Skills:
- Strong hands-on experience in PySpark and GCP (or any other cloud platform).
- Expertise in designing, implementing, and optimizing data pipelines using Spark, Hadoop, and Hive.
- Proficiency in SQL for data transformation and querying.
- Experience monitoring and troubleshooting data pipelines to ensure minimal downtime and optimal performance.
- Strong problem-solving skills and the ability to work in a collaborative team environment.
- Excellent written and verbal communication skills.

Good-to-Have Skills:
- Experience with big data technologies beyond Spark and Hadoop.
- Knowledge of data governance, data security, and data stewardship practices.
- Exposure to containerization (Docker, Kubernetes) and CI/CD for data pipelines.
- Experience with streaming technologies such as Kafka.