  • Location: Northampton Square, Clerkenwell, London EC1V 0HB, UK
  • Salary: £500 - £600 per day
  • Job Type: Contract
  • Reference: JHJSLEADDATA
  • Date posted: 5-Jun-19
A major financial services organisation urgently seeks an experienced Lead Data/Big Data Engineer with an exceptional Java background and the ability to work at both the strategic planning AND hands-on level in the data field, with excellent Hadoop, Spark, Kafka, Hive, SQL etc. Candidates MUST have GCP/Google Cloud experience.

Responsibilities:

As a key member of the technical team, working alongside Engineers, Data Scientists and Data Users, you will be expected to define and contribute at a high level to many aspects of our collaborative Agile development process:

* Software design, Java development and automated testing of new and existing components in an Agile, DevOps, dynamic environment
* Promoting development standards, code reviews, mentoring, knowledge sharing
* Product and feature design, scrum story writing
* Data Engineering and Management
* Product support & troubleshooting
* Implement the tools and processes, handling performance, scale, availability, accuracy and monitoring
* Liaison with BAs to ensure that requirements are correctly interpreted and implemented, and with Testers to ensure that they understand how requirements have been implemented so that they can be effectively tested.
* Participation in regular planning and status meetings; input to the development process through involvement in Sprint reviews and retrospectives; input into system architecture and design.
* Peer code reviews.
* 3rd line support.



Essential Experience:

* Experienced in Java and/or Python in Unix/Linux environments, both on-premises and in the cloud
* MUST have solid, hands-on GCP/Google Cloud project experience
* Java development and design using Java 1.7/1.8, with an advanced understanding of Java's core features and when to use them
* Experience with most of the following technologies: Apache Hadoop, Apache Spark, Spark Streaming, YARN, Kafka, Hive, HBase, Presto, Python, ETL frameworks, MapReduce, SQL, RESTful services
* Sound knowledge of working on the Unix/Linux platform
* Hands-on experience building data pipelines using Hadoop components Sqoop, Hive, Pig, Spark, Spark SQL.
* Experience with time-series/analytics DBs such as Elasticsearch
* Experience with industry standard version control tools (Git, GitHub), automated deployment tools (Ansible & Jenkins) and requirement management in JIRA
* Exposure to Agile project methodology, as well as other methodologies such as Kanban
* Understanding of data modelling using both relational and non-relational techniques
* Coordination between onsite and offshore teams
* Experience debugging code issues and communicating the findings to the development team/Architects
* Nice to have: ELK experience.

Whilst it is not essential to have ALL of the above skills and experience, preference will be given to candidates who have more of them AND a banking background. Candidates who do NOT have solid GCP/Google Cloud Platform experience will not be considered.
