Nike has embraced big data technologies to enable data-driven decisions, and we're looking to expand our Data Engineering team to keep pace. As a Data Engineer, you will work with a variety of talented Nike teammates and be a driving force in building first-class solutions for Nike Technology and its business partners, contributing to development projects related to supply chain, commerce, consumer behavior, and web analytics, among others.

Role responsibilities:

Design and implement data products and features in collaboration with product owners, data analysts, and business partners using Agile / Scrum methodology

Contribute to overall architecture, frameworks and patterns for processing and storing large data volumes

Research, evaluate and utilize new technologies/tools/frameworks centered around high-volume data processing

Translate product backlog items into engineering designs and logical units of work

Profile and analyze data for the purpose of designing scalable solutions

Define and apply appropriate data acquisition and consumption strategies for given technical scenarios

Design and implement distributed data processing pipelines using tools and languages prevalent in the big data ecosystem

Build utilities, user defined functions, libraries, and frameworks to better enable data flow patterns

Implement complex automated routines using workflow orchestration tools

Work with architecture, engineering leads and other teams to ensure quality solutions are implemented, and engineering best practices are defined and adhered to

Anticipate, identify and solve issues concerning data management to improve data quality

Build and incorporate automated unit tests and participate in integration testing efforts

Utilize and advance continuous integration and deployment frameworks

Troubleshoot data issues and perform root cause analysis

Work across teams to resolve operational & performance issues

The following qualifications and technical skills will position you well for this role:

MS/BS in Computer Science or a related technical discipline

1+ years of experience in large-scale software development and 2+ years of big data experience

Strong programming experience, Python or Scala preferred

Experience designing, estimating and executing for complex software projects

Extensive experience with Hadoop and related processing frameworks such as Spark, Hive, and Storm

Experience with RDBMSs, SQL, and SQL analytical functions

Experience with workflow orchestration tools like Apache Airflow

Experience with source code control tools like GitHub or Bitbucket

Experience with performance and scalability tuning

Ability to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders

Interest in and ability to quickly pick up new languages, technologies, and frameworks

Experience in Agile/Scrum application development

The following skills and experience are also relevant to our overall environment, and nice to have:

Experience with Java

Experience working in a public cloud environment, particularly AWS

Experience with cloud warehouse tools like Snowflake

Experience with messaging/streaming/complex event processing tools and frameworks such as Kinesis, Kafka, Spark Streaming, Flink, and NiFi

Experience working with NoSQL data stores such as HBase and DynamoDB

Experience building RESTful APIs to enable data consumption

Experience with infrastructure-as-code tools such as Terraform or CloudFormation and automation tools such as Jenkins or CircleCI

Experience with practices like Continuous Delivery, Continuous Integration, and Automated Testing

These are the characteristics that we strive for in our own work. We would love to hear from candidates who embody the same:

Desire to work collaboratively with your teammates to come up with the best solution to a problem

Demonstrated experience and ability to deliver results on multiple projects in a fast-paced, agile environment

Excellent problem-solving and interpersonal communication skills
