Posted on 10/30/2024

Junior Data Engineer

Experience: 4 Years, Salary: Not disclosed, Job Code: JPC-1001, Location: Bedford, Nashua, USA

Job highlights:

  • Extensive experience in Data Lake architecture, building data pipelines using AWS services like EC2, Lambda, Redshift, Glue, Athena, EMR, CloudWatch, and S3.
  • Must be proficient in Python and SQL.
  • Master's degree (Applied Mathematics, Management Science, Data Science, Statistics, Econometrics, or Engineering).
Project Role

Junior Data Engineer

  • The Data Engineer will support building data pipelines on the Data Lake, combining several data sets to enable the team to perform univariate and multivariate analysis of Sales, Claims, and Payer data for the Commercial, Sales Operations, and Market Access teams.
  • They will be responsible for building data pipelines on AWS using native tools and technologies, synthesizing key insights from analysis to provide strategic recommendations and implications, as appropriate.

Responsibilities include:

  • Build and enhance data ingestion pipelines, including performance troubleshooting and remediation of reported errors at different tiers.
  • Communicate with external data providers to resolve source data issues.

Required Skills:

  • Overall knowledge of the AWS data stack and DevOps tools such as GitLab and Kubernetes.
  • Knowledge of ETL tools such as AWS Glue, Talend, and dbt, along with Python and SQL.
  • Knowledge of SFDC, Redshift, AWS S3, AppFlow, SFTP, and GitHub.
  • Sound knowledge of life sciences commercial data operations.
  • Deployment expertise on cloud platforms like AWS.
  • CI/CD and DevOps skills, using tools like GitHub/GitLab and Jenkins for pipelines.

Qualifications:

  • Master’s degree (Applied Mathematics, Management Science, Data Science, Statistics, Econometrics or Engineering).
  • 4+ years of relevant post-collegiate job experience.
  • Extensive experience in Data Lake architecture, building data pipelines using AWS services like EC2, Lambda, Redshift, Glue, Athena, EMR, CloudWatch, and S3.
  • Must be proficient in Python and SQL.
  • Experience working with Pharma commercial syndicated data such as IQVIA, Symphony, Claims, Payer, CRM, and MDM data.
  • High motivation, good work ethic, maturity and personal initiative.
  • Strong oral and written communication skills.

At Fourhub, we are dedicated to providing equal employment opportunities.

Share your resume with us, and we’ll be in touch as soon as we find a role that’s a great fit for your skills and experience.