Hadoop PySpark Data Pipeline Build Engineer

Job Details

  • ID#49397981
  • Address Philadelphia, PA 19130, USA
  • Job type
  • Salary USD $130,000 - $172,500
  • Hiring Company

    Kforce Technology Staffing

  • Showed 04th March 2023
  • Date 03rd March 2023
  • Deadline 02nd May 2023
  • Category

    Et cetera

Hadoop PySpark Data Pipeline Build Engineer

Vacancy expired!

RESPONSIBILITIES:

Kforce has a client that is seeking a Hadoop PySpark Data Pipeline Build Engineer in Philadelphia, PA.

Duties include:

  • Lead complex technology initiatives, including companywide initiatives with broad impact
  • Act as a key participant in developing standards and companywide best practices for engineering complex, large-scale technology solutions across technology engineering disciplines
  • Design, code, test, debug, and document projects and programs
  • Review and analyze complex, large-scale technology solutions for tactical and strategic business objectives, the enterprise technological environment, and technical challenges that require in-depth evaluation of multiple factors, including intangible or unprecedented technical factors
  • Make decisions when developing standards and companywide best practices for engineering and technology solutions, requiring an understanding of industry best practices and new technologies; influence and lead the technology team to meet deliverables and drive new initiatives
  • Collaborate and consult with key technical experts, the senior technology team, and external industry groups to resolve complex technical issues and achieve goals
  • Lead projects and teams, or serve as a peer mentor

REQUIREMENTS:

  • 5+ years of Big Data Platform (data lake) and data warehouse engineering experience demonstrated through prior work
  • Hands-on experience developing modern data pipeline services, including movement, collection, integration, and transformation of structured/unstructured data with built-in automated data controls, built-in logging/monitoring/alerting, and pipeline orchestration managed to operational SLAs
  • Hands-on experience developing big data solutions leveraging the spectrum of Hadoop Platform-compatible features such as Atlas, Spark, Flink, Kafka, Sqoop, Cloudera Manager, Airflow, Impala, Hive, HBase, Tez, Hue, and a variety of source data connectors
  • Experience automating DQ validation in the data pipeline
  • Experience implementing automated data change management, including code and schema versioning, QA, CI/CD, and rollback processing

The pay range is the lowest to highest compensation we reasonably and in good faith believe we would pay at posting for this role. We may ultimately pay more or less than this range. Employee pay is based on factors like relevant education, qualifications, certifications, experience, skills, seniority, location, performance, union contract, and business needs. This range may be modified in the future.

We offer comprehensive benefits including medical/dental/vision insurance, HSA, FSA, 401(k), and life, disability & AD&D insurance to eligible employees. Salaried personnel receive paid time off. Hourly employees are not eligible for paid time off unless required by law. Hourly employees on a Service Contract Act project are eligible for paid sick leave.

Note: Pay is not considered compensation until it is earned, vested, and determinable. The amount and availability of any compensation remains in Kforce's sole discretion unless and until paid, and may be modified in its discretion consistent with the law. This job is not eligible for bonuses, incentives, or commissions.

Kforce is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, pregnancy, sexual orientation, gender identity, national origin, age, protected veteran status, or disability status.
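The automated DQ (data quality) validation mentioned in the requirements can be sketched in plain Python. This is a minimal illustration only; the rule names, fields, and thresholds below are hypothetical, and a production pipeline would typically express equivalent checks as PySpark DataFrame operations wired into the pipeline's logging and alerting.

```python
# Minimal sketch of automated data-quality (DQ) validation.
# All rule names and fields are hypothetical illustrations.

def validate_rows(rows, rules):
    """Apply each named rule to every row; collect failing row indices per rule."""
    failures = {name: [] for name in rules}
    for i, row in enumerate(rows):
        for name, check in rules.items():
            if not check(row):
                failures[name].append(i)
    return failures

# Hypothetical rules: a non-null key and an amount within an expected range.
rules = {
    "id_not_null": lambda r: r.get("id") is not None,
    "amount_in_range": lambda r: 0 <= r.get("amount", -1) <= 1_000_000,
}

rows = [
    {"id": 1, "amount": 250},
    {"id": None, "amount": 500},
    {"id": 3, "amount": -10},
]

report = validate_rows(rows, rules)
print(report)  # row indices that failed each rule
```

In a PySpark job, checks like these are commonly run as filtered counts over a DataFrame, with nonzero failure counts routed to monitoring/alerting before downstream loads proceed.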
