JPMorganChase
Bengaluru, Karnataka, India
(on-site)
Posted
2 days ago
Job Type
Full-Time
Job Function
Banking
Lead Software Engineer - Data Engineer
Description
We have an opportunity to impact your career and provide an adventure where you can push the limits of what's possible. As a Lead Software Engineer at JPMorganChase within the Marketing Automation Platforms Team, you are an integral part of an agile team that designs, builds, and leads the development of scalable data platforms and big data processing solutions. You will drive engineering excellence across batch/stream processing, ETL/ELT pipelines, data lake patterns (Medallion architecture), and analytics platforms including Snowflake, using Python/Java, Spark, and modern data storage formats such as Parquet and Avro.
Job responsibilities
- Lead the architecture and engineering of large-scale data processing and platform solutions using Python and Java.
- Build GenAI solutions using Retrieval-Augmented Generation (RAG) to provide grounded, explainable outputs over enterprise data.
- Design and integrate Knowledge Graphs to improve entity resolution, contextual retrieval, and relationship-based reasoning.
- Develop Agentic AI workflows to automate tasks using tool orchestration, planning patterns, guardrails, and monitoring.
- Design and implement robust ETL/ELT pipelines, including ingestion, transformation, validation, reconciliation, and publishing across curated layers.
- Build and operationalize Medallion architecture patterns for data quality, lineage, governance, and reuse.
- Develop and optimize Data Lake solutions, including partitioning strategies.
- Ensure engineering best practices: code quality, testing, CI/CD, observability, security-by-design, and operational readiness.
- Drive performance optimization across Spark jobs (shuffle tuning, joins, caching, skew handling), storage layout, and Snowflake workloads.
- Partner with product owners, architects, data governance, and downstream consumers to translate requirements into resilient technical solutions.
- Mentor and guide engineers; set standards for design reviews, code reviews, incident RCA, and ongoing platform modernization.
Required qualifications, capabilities, and skills
- Formal training or certification in software engineering concepts and 5+ years of software engineering experience, including several years in data engineering / big data platforms.
- Strong hands-on development skills in Python and/or Java (ideally both).
- Strong knowledge and hands-on experience with GenAI solution development.
- Strong knowledge of RAG architectures including ingestion, chunking, embeddings, retrieval, grounding, and evaluation.
- Strong knowledge of Knowledge Graph modeling concepts and practical integration into retrieval and analytics workflows.
- Strong knowledge of Agentic AI patterns including tool use, workflow orchestration, guardrails, and quality monitoring.
- Strong experience with Apache Spark and distributed data processing concepts.
- Proven expertise building ETL/ELT pipelines and data integration frameworks.
- Strong understanding of data storage/serialization and table/file formats, including Parquet and Avro.
- Deep understanding of Big Data ecosystem fundamentals (distributed compute, fault tolerance, partitioning, data quality, metadata management).
- Strong experience implementing Medallion architecture and Data Lake design principles.
- Strong working knowledge of Snowflake including loading/unloading patterns and performance considerations.
- Ability to lead technical decisions, drive alignment across teams, and communicate clearly with technical and non-technical stakeholders.
Preferred qualifications, capabilities, and skills
- Experience with data orchestration frameworks and pipeline automation.
- Experience with data governance concepts (lineage, cataloging, access controls, PII handling) and production operations.
- Exposure to streaming/event-driven patterns and incremental processing strategies.
- Experience designing reusable data products, frameworks, or platform components used by multiple teams.
- Domain experience in highly regulated environments (risk, audit, compliance, privacy).
- Experience with lakehouse patterns, table formats (e.g., ACID table layers), and data platform modernization programs.
- Experience with cost optimization and FinOps-style controls for big data workloads.
Job ID: 84185097
Please refer to the company's website or job descriptions to learn more about them.