The client is a global software development company with a strong emphasis on the automotive domain. The business supports several major OEMs and Tier 1 suppliers and is currently engaged in large-scale autonomous drive development projects.
The feature development team is looking to expand its skill set by adding a Big Data Engineer to extract new features from a data repository consisting of sensor and test-vehicle data.
- Planning, creating and developing advanced cloud applications using modern big data technologies – Hadoop and its ecosystem (YARN, MapReduce, Spark, HBase, Kafka, etc.)
- Restructuring existing code
- Writing high-performance, reliable, clean code in languages such as Java, Scala and Python
- Modelling complex big data architectures
- Analysing vast data stores to uncover insights whilst adhering to GDPR
- Minimum 4 years’ experience in Big Data roles
- Expert knowledge of Hadoop and its ecosystem (e.g. YARN, MapReduce, Spark, HBase, Kafka) in both development and implementation
- Passionate about writing high-performance, reliable, clean code in Scala, Java and Python, including writing MapReduce jobs
- Experience in loading disparate data sets and pre-processing them using Hive and Pig
- Knowledge of distributed processing principles/frameworks and modelling complex architectures
- Track record of analysing vast data stores and uncovering insights while maintaining security and data privacy
- Knowledge and understanding of agile methodologies and the agile mindset
- Prior involvement in project coordination
- Good verbal and written communication skills
- Experience presenting to various stakeholders
- Opportunity to work for an exciting business focussed on pioneering the future of autonomous driving
- Salary negotiable depending on experience
- Attractive holiday package
- Plus additional benefits