Company is seeking a Senior Amazon Redshift Engineer to lead development of the Company Data Cloud. The candidate will have a proven track record of engineering cloud-based Big Data platforms in support of BI & Analytics initiatives.
The Senior Amazon Redshift Engineer will be comfortable working across the full spectrum of data engineering disciplines, spanning database architecture, database design, and analytic solution implementation. He or she will work as part of an interdisciplinary Agile team closely aligned with Company business units to develop and deploy descriptive, predictive, and prescriptive analytical solutions, and will work closely with peers and management to strengthen analytic competency across the enterprise.
The ideal candidate will be distinguished not only by mastery of data engineering in a cloud-based, multi-tenant environment, but also by high levels of creativity and tenacity.
Key Responsibilities include
- Lead the development function on the Company Data Cloud, engineering the platform for scalability, performance, optimal cost, and a high degree of integration with other cloud- and ground-based data and application platforms
- Mentor DB engineers and serve as the organization’s expert on the Company Data Cloud platform technologies
- Build scalable databases in a Lambda-architecture environment for the consumption of structured and unstructured data using batch, micro-batch, and streaming data acquisition
- Parse semi-structured data into structured formats using tools, regular expressions, or scripting
- Develop ETL/ELT and data integration routines
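As a hedged illustration of the semi-structured-to-structured parsing responsibility above, the sketch below uses Python's standard `re` module to turn web-server log lines into structured records. The log format and field names are hypothetical examples, not a format specified by this role.

```python
import re

# Hypothetical semi-structured input: common-log-style web server lines.
# Named capture groups turn each line into a structured record (a dict).
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) - - \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3})'
)

def parse_log_line(line):
    """Return a dict of named fields, or None if the line does not match."""
    match = LOG_PATTERN.match(line)
    return match.groupdict() if match else None

record = parse_log_line(
    '203.0.113.7 - - [10/Oct/2024:13:55:36 +0000] "GET /index.html HTTP/1.1" 200'
)
```

Records produced this way can then be loaded into structured stores (for example, staged as flat files for a Redshift COPY) as part of an ETL/ELT routine.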
Qualifications & Requirements
- Bachelor’s Degree in Computer Science, Math, Statistics or other quantitative discipline
- Very strong knowledge of AWS data platforms, primarily Redshift but also including DynamoDB, Data Pipeline, and EMR
- Very strong knowledge of data warehousing concepts
- Very strong knowledge of data warehousing best practices for optimal performance in an MPP environment
- Strong knowledge of Chef Infrastructure/Application Automation and CloudFormation
- Strong MySQL experience
- Working knowledge of at least one ETL tool (Talend, Pentaho, SSIS) and open-source data movement tools such as Sqoop
- Proficient with continuous deployment methodology, DevOps and Agile
- 5+ years of experience developing against NoSQL data stores
- 5+ years of programming in at least one language (Python, Java, C/C++)
- 3+ years of experience working in a Data Warehousing environment
- 3+ years of experience developing ETL
- 5+ years of Linux/UNIX experience
Preferred Skills and Experience
- Experience working in Education, Publishing or Media companies
- Experience with Apache Spark
- Experience with real-time streaming using AWS Kinesis or Apache Kafka/Storm