Role Overview
Support Hadoop-based data processing platforms for a large enterprise healthcare organization
Hands-on development role focused on batch data processing, not reporting or BI
Build, enhance, and maintain Spark-based workflows used across downstream analytics and business systems
Work within a regulated, production enterprise environment
What You’ll Be Doing
Develop and enhance data processing solutions on Hadoop platforms
Build and support Spark-based batch workflows
Write and maintain Python and SQL code used in production systems
Review peer code and contribute to coding standards and best practices
Create and maintain technical documentation and specifications
Participate in unit testing, debugging, and release activities
Support applications through the full software development life cycle
Ensure data integrity, availability, and security requirements are met
Collaborate with developers, analysts, and project stakeholders
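To give a flavor of the batch work described above, here is a minimal plain-Python sketch of the filter-then-aggregate shape a Spark batch job typically takes; the record fields and values are illustrative only, not from the actual environment:

```python
from collections import defaultdict

# Hypothetical records standing in for rows a Spark batch job would
# read from HDFS; field names are illustrative only.
records = [
    {"member_id": "M1", "claim_amount": 120.0, "status": "approved"},
    {"member_id": "M1", "claim_amount": 80.0,  "status": "denied"},
    {"member_id": "M2", "claim_amount": 200.0, "status": "approved"},
]

def total_approved_by_member(rows):
    """Filter then aggregate: the same shape as a Spark
    filter/groupBy/sum over a DataFrame, done here in plain Python."""
    totals = defaultdict(float)
    for row in rows:
        if row["status"] == "approved":
            totals[row["member_id"]] += row["claim_amount"]
    return dict(totals)

print(total_approved_by_member(records))
# -> {'M1': 120.0, 'M2': 200.0}
```

In the role itself this logic would run as a distributed Spark job rather than in-process Python; the sketch only shows the transformation pattern.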
Required Skills and Experience
3+ years of professional experience in software development or data engineering
Hands-on experience working with Hadoop
Experience developing Spark batch jobs
Proficiency in Python
Strong SQL skills, including writing and maintaining stored procedures
Experience supporting production data processing systems
Familiarity with SDLC, change management, and release processes
Ability to work independently and manage assigned tasks
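As a rough illustration of the SQL proficiency listed above, the following self-contained example runs an aggregation query from Python; an in-memory SQLite database stands in for the enterprise warehouse (SQLite has no stored procedures, so in a production RDBMS this logic would often live in one), and the table and column names are hypothetical:

```python
import sqlite3

# In-memory SQLite stands in for the production database; the schema
# below is illustrative, not from the actual environment.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (member_id TEXT, amount REAL, status TEXT)")
conn.executemany(
    "INSERT INTO claims VALUES (?, ?, ?)",
    [("M1", 120.0, "approved"),
     ("M1", 80.0,  "denied"),
     ("M2", 200.0, "approved")],
)

# A typical batch aggregation: total approved claim amounts per member.
rows = conn.execute(
    """
    SELECT member_id, SUM(amount) AS total_approved
    FROM claims
    WHERE status = 'approved'
    GROUP BY member_id
    ORDER BY member_id
    """
).fetchall()

print(rows)
# -> [('M1', 120.0), ('M2', 200.0)]
```

The same query pattern (filtered GROUP BY with an aggregate) carries over directly to Spark SQL or a warehouse stored procedure.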
Nice to Have
Experience in regulated or enterprise environments
Exposure to large-scale batch data platforms
Familiarity with healthcare or financial services data
Additional Details
Hybrid onsite role in Jacksonville, FL
No travel required
Standard business hours, 8 AM – 5 PM
