Key Skills: Azure Databricks, Spark Streaming, Python, PySpark, SQL

Job Description:
• Over 10 years of IT experience in the big data domain, specializing in Hadoop, data lakes, and data engineering using Python and Spark.
• At least 3–4 years of experience with the Databricks workspace, including Databricks notebooks, job clusters, Delta Lake, Databricks Lakehouse, and Unity Catalog.
• A minimum of 5 years of experience in PySpark and Python development.
• Over 6 years of experience in designing and implementing data pipelines.
• Proficient in Spark Streaming and its integration into data processing workflows.
• Experienced in designing and developing data pipelines and ETL/ELT jobs for data ingestion and processing in data lakes.
• Strong expertise in SQL Server, NoSQL, Spark SQL, data modeling, identity & access management, query optimization, and parallel processing.
• Proven problem-solving abilities and excellent verbal and written communication skills.
• Familiar with processing streaming data using Kafka and Pub/Sub technologies.
• Experienced in Agile development, SCRUM methodologies, and Application Lifecycle Management (ALM).
• In-depth knowledge of data warehouse concepts.

Role Type: Full Time
Work Mode: Work from Office
Work Timings: 3 PM – 12 PM (IST)
Location: Madhapur, Hi Tech City
Job Title
Databricks Developer :: Full Time :: Hi Tech City :: Hyderabad