
Job Title


Epsilon India | Principal Software Engineer


Company : Epsilon India


Location : Bengaluru, Karnataka


Created : 2025-01-07


Job Type : Full Time


Job Description

Principal Software Engineer
HUB 2 Building of SEZ Towers, Karle Town Center, Nagavara, Bengaluru, Karnataka, India
Full-time

Company Description

When you’re one of us, you get to run with the best. For decades, we’ve been helping marketers from the world’s top brands personalize experiences for millions of people with our cutting-edge technology, solutions and services. Epsilon’s best-in-class identity gives brands a clear, privacy-safe view of their customers, which they can use across our suite of digital media, messaging and loyalty solutions. We process 400+ billion consumer actions each day and hold many patents of proprietary technology, including real-time modeling languages and consumer privacy advancements. Thanks to the work of every employee, Epsilon India is now Great Place to Work-Certified™. Epsilon has also been consistently recognized as industry-leading by Forrester, Adweek and the MRC. Positioned at the core of Publicis Groupe, Epsilon is a global company with more than 8,000 employees around the world. For more information, visit /apac or our page.

Job Description

About the BU
The Product team forms the crux of our powerful platforms and connects millions of customers to the product magic. This team of innovative thinkers develops and builds products that help Epsilon be a market differentiator. They map the future and set new standards for our products, empowered with industry best practices, ML and AI capabilities. The team passionately delivers intelligent end-to-end solutions and plays a key role in Epsilon’s success story.

Why are we looking for you?
At Epsilon, we run on our people’s ideas. It’s how we solve problems and exceed expectations. Our team is now growing, and we are on the lookout for talented individuals who always raise the bar by constantly challenging themselves and are experts in building customized solutions in the digital marketing space.

What will you enjoy in this role?
Are you someone who wants to work with cutting-edge technology and enable marketers to create data-driven, omnichannel consumer experiences through data platforms? Then you could be exactly who we are looking for. Apply today and be part of a creative, innovative, and talented team that’s not afraid to push boundaries or take risks.

What will you do?
We seek Software Engineers with experience building and scaling services in on-premises and cloud environments. As a Principal Software Engineer in the Epsilon Attribution/Forecasting Product Development team, you will design, implement, and optimize data processing solutions using Scala, Spark, and Hadoop (a brief illustrative sketch follows this description). You will collaborate with cross-functional teams to deploy big data solutions on our on-premises and cloud infrastructure, and you will build, schedule, and maintain workflows. You will perform data integration and transformation, troubleshoot issues, document processes, communicate technical concepts clearly, and continuously enhance our attribution and forecasting engines. Strong written and verbal communication skills (in English) are required to facilitate work across multiple countries and time zones.
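To make the kind of Spark work described above more concrete, here is a minimal Scala sketch of a batch aggregation job. It is an illustration only: the input/output paths and the column names (event_time, campaign_id, conversion) are hypothetical and are not taken from the posting.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Minimal sketch of a Spark batch job in the spirit of the role description.
// Paths and column names are hypothetical placeholders.
object AttributionJobSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("attribution-batch-sketch")
      .getOrCreate()

    // Read raw click/conversion events (hypothetical location and schema).
    val events = spark.read.parquet("/data/events/")

    // Aggregate conversions per campaign per day.
    val dailyConversions = events
      .filter(col("conversion") === true)
      .groupBy(to_date(col("event_time")).as("event_date"), col("campaign_id"))
      .agg(count("*").as("conversions"))

    // Write partitioned output for downstream attribution/forecasting steps.
    dailyConversions.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("/data/output/daily_conversions/")

    spark.stop()
  }
}

A job like this would typically be packaged with sbt and submitted via spark-submit on the on-premises or cloud cluster, then scheduled as part of a workflow DAG.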
Good understanding of Agile methodologies (Scrum) is expected.

Qualifications

- Strong experience (9 to 15 years) in Python or Scala programming and extensive experience with Apache Spark for Big Data processing, including designing, developing, and maintaining scalable on-premises and cloud environments, especially on AWS and, as needed, GCP.
- Proficiency in performance tuning of Spark jobs: optimizing resource usage, shuffling, partitioning, and caching for maximum efficiency in Big Data environments (a short sketch appears at the end of this section).
- In-depth understanding of the Hadoop ecosystem, including HDFS, YARN, and MapReduce.
- Expertise in designing and implementing scalable, fault-tolerant data pipelines with end-to-end monitoring and alerting.
- Hands-on experience with Python for developing infrastructure modules.
- Solid grasp of database systems and SQL, with the ability to write efficient queries (RDBMS/warehouse) that handle terabytes of data.
- Familiarity with design patterns and best practices for efficient data modelling, partitioning strategies, and sharding in distributed systems, plus experience building, scheduling, and maintaining DAG workflows.
- End-to-end ownership of the definition, development, and documentation of software objectives, business requirements, deliverables, and specifications in collaboration with stakeholders.
- Experience with Git (or an equivalent source control system) and a solid understanding of unit and integration test frameworks.
- Ability to collaborate with stakeholders and teams to understand requirements and develop a working solution, and to work within tight deadlines, effectively prioritizing and executing tasks in a high-pressure environment.
- Ability to mentor junior staff.

Advantageous to have experience with:

- Databricks for unified data analytics, including Databricks Notebooks, Delta Lake, and catalogs.
- The ELK (Elasticsearch, Logstash, Kibana) stack for real-time search, log analysis, and visualization.
- Analytics, including the ability to derive actionable insights from large datasets and support data-driven decision-making.
- Data visualization tools such as Tableau, Power BI, or Grafana.
- Docker for containerization and Kubernetes for orchestration.
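The Spark tuning bullet above can be illustrated with a short, hedged Scala sketch. The configuration values, paths, and the join key (user_id) are placeholders, not recommendations; real settings depend on cluster size and data volume.

import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel

object SparkTuningSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("spark-tuning-sketch")
      .config("spark.sql.shuffle.partitions", "400")   // size shuffle parallelism to the data
      .config("spark.sql.adaptive.enabled", "true")    // let AQE coalesce small shuffle partitions
      .getOrCreate()

    // Hypothetical inputs sharing a join key.
    val clicks = spark.read.parquet("/data/clicks/")
    val conversions = spark.read.parquet("/data/conversions/")

    // Repartition on the join key to spread skew, and cache the side that is reused.
    val clicksByUser = clicks
      .repartition(400, clicks("user_id"))
      .persist(StorageLevel.MEMORY_AND_DISK)

    val joined = clicksByUser.join(conversions, Seq("user_id"))
    println(joined.count())   // materializes the cache before further reuse

    clicksByUser.unpersist()
    spark.stop()
  }
}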