Big Data Engineer

Newbury, Berkshire
  • Full Time
  • IT/Technology
Posting date: 20 Aug 2019

Your Role

The Big Data Engineer provides expert guidance and delivers, both personally and through others, to source and integrate structured and unstructured data from dozens of local data sources, including streaming data feeds, into a data lake. You will implement solutions to wrangle, cleanse, validate and govern the petabyte-scale data lake, and build machine learning applications that use large volumes of data to generate outputs and commercial actions that deliver incremental revenue or reduce cost.

Key accountabilities
  • Designing and producing high performing and stable applications to perform complex processing of massive (petabyte scale) volumes of data in a Hadoop based environment.

  • Working alongside Data Scientists to build end-to-end applications that make use of large volumes of source data from the operational systems and output insights back to business systems

  • Building real-time data processing applications which are integrated with business systems to enable value from analytic models to drive rapid decision making

  • Sourcing, ingesting, wrangling and validating data sets, building pipelines to transform data and produce analytical records for machine learning

  • Managing complex stakeholder relationships
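To illustrate the sourcing, wrangling and validation work described above, here is a minimal sketch of a cleanse-and-validate step in plain Python. All names (`cleanse`, `validate`, `build_analytical_records`) and the record shape are hypothetical; at petabyte scale these stages would run as Spark jobs rather than in-memory Python:

```python
# Toy sketch of a wrangle -> cleanse -> validate pipeline step.
# Function names and the record fields are illustrative only; a real
# pipeline at this scale would implement these stages on Spark.

def cleanse(record: dict) -> dict:
    """Normalise field names and strip whitespace from string values."""
    return {k.strip().lower(): (v.strip() if isinstance(v, str) else v)
            for k, v in record.items()}

def validate(record: dict, required: set) -> bool:
    """A record is valid if every required field is present and non-empty."""
    return all(record.get(field) not in (None, "") for field in required)

def build_analytical_records(raw_records, required=frozenset({"customer_id", "event_ts"})):
    """Keep only cleansed records that pass validation."""
    cleansed = (cleanse(r) for r in raw_records)
    return [r for r in cleansed if validate(r, required)]

raw = [
    {" Customer_ID ": "C001", "Event_TS": "2019-08-20T10:00:00"},
    {"customer_id": "", "event_ts": "2019-08-20T10:01:00"},  # dropped: empty id
]
records = build_analytical_records(raw)
```

The same shape (map a cleansing function over the source, then filter on a validation predicate) carries over directly to Spark transformations.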

Languages and Data Tools

  • Hadoop ecosystem (Spark, Hive/Impala, HBase, Yarn)
  • Scala and Python
  • Unix-based systems, including Bash scripting

Other distributed technologies such as Cassandra, Solr/Elasticsearch, Flink and Flume would also be desirable.

What are we looking for?

We need someone with expert-level experience in designing, building and managing applications that process large amounts of data in a Hadoop / Spark ecosystem, along with extensive experience in performance-tuning applications on Hadoop and configuring Hadoop-based systems to maximise performance. Ideally, you will also know how to build systems that perform real-time data processing using Spark Streaming and Kafka, or similar technologies. It would be great if you have dealt with common SDLC practices, including SCM, build tools, unit testing, TDD/BDD, continuous delivery and agile methods, and have worked in large-scale multi-tenant Hadoop environments, both on-premise and in the cloud.


What’s in it for you?
Great reward with a competitive salary. We will also throw in 28 days’ holiday and a flexible work environment. You will get to work with a fun, diverse and driven team who love what they do and a leadership team who listen, support and inspire you to be your very best!

If all of the above sounds good - apply now!
Our Commitment

Vodafone is committed to attracting, developing and retaining the very best people by offering a flexible, motivating and inclusive workplace in which talent is truly recognised, developed and rewarded. We believe that diversity plays an important role in the success of our business and we are committed to creating an inclusive work environment, which respects, values, celebrates and makes the most of people’s individual differences - we are not only multinational but multicultural too.


Vodafone is committed to making reasonable adjustments for all candidates considered to have a disability during the recruitment assessment process. Should you meet the minimum criteria for the role you are applying to, we will contact you to discuss the reasonable adjustments that you may need.

