Senior Research Engineer - Big Data (Hadoop, Flink, Spark, Cloud Computing) Posted Mar 13
GIOS Technology Limited, München, Bayern, Germany
  • This employer requests that only candidates in Germany apply to this job.

Key requirement: Good understanding of cloud computing development and innovation in distributed systems; experience with the Hadoop ecosystem (e.g. Flink, Spark, Storm, Kafka)

Client & Opportunity Overview

Our Client's European Research Center (ERC) in Munich is responsible for applied research, advanced technical innovation, architecture evolution design and strategic technical planning. The IT R&D and Big Data group in ERC is looking for Research Engineers and Big Data Experts. The positions are unique in the European high-tech ecosystem, offering the opportunity to perform research, architecture design and development in the Big Data distributed systems domain in a corporate setting while building a new and agile team.

Key responsibilities

Design and develop cutting-edge Big Data technologies

Process and analyze large and/or live data sets while addressing the 5 V's challenges

Architect new Big Data-based solutions for pain points of specific markets

Research and patent new innovations in the Big Data and ICT space

Screen and evaluate tools, frameworks and libraries for the implementation of Big Data algorithms

Work with large-scale infrastructures and distributed solutions

Interact closely with cloud and Big Data architects to identify limitations of existing solutions and propose solutions and concepts to address them

Interact with account teams and with Our Client's EU ecosystem to drive joint innovation

Engage in dialogue with the open source community and with key EU institutions

Contribute to Our Client's Big Data research agenda


Minimum qualifications:

MS or PhD in computer science, electrical engineering, or another domain-related discipline

Academic or industrial experience working with distributed systems

Hands-on experience working with Hadoop-related frameworks (MR, HDFS, Flink, Spark, Storm, Kafka, HBase, Tachyon)

Basic understanding of streaming and database systems

Basic understanding of the cloud computing paradigm and supercomputing

Comfortable communicating in English

Excellent interpersonal skills, team spirit, and the ability to work independently

Hands-on and can-do attitude

Preferred qualifications:

Experience with stream, real-time, and in-memory processing at large scale

Open source experience and/or contributions

Research experience with good publication record and/or patents in (any of) Algorithms for Big Data, Architecture, Distributed Systems, Parallel Databases, Networking, or Systems.

Experience with distributed computation, in-memory processing, cluster management

Employment Type: Permanent