We are seeking a Middle Data Engineer to join our team. In this role, you will be responsible for processing and managing data assets by setting up automation and manually handling one-off inbound requests. You will work with Snowflake, the AWS suite, and other ETL tools.
2+ years of experience with Python
Proficient in SQL and familiar with Git and SDLC practices
Basic experience with AWS
Excellent analytical and problem-solving skills
Ability to work independently and in a team environment
At least a strong Intermediate level of English
Experience with Snowflake
Familiarity with DBT
Process and manage data assets by setting up automation and manually processing one-off inbound requests
Organize, structure, and schedule jobs to run concurrently or sequentially as needed
Use automated jobs in Snowflake (e.g., Snowpipes) and “out of the box” ETL tools (e.g., DBT), and work inside AWS’s suite to ensure we use the correct tool for each use case (see the sketch after this list)
Collaborate with cross-functional teams to identify opportunities for process improvement and automation
Develop and maintain documentation for data processing and automation workflows
Ensure the quality, accuracy, and reliability of all processed data assets
Stay up-to-date with emerging data processing technologies and tools
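For candidates new to Snowpipes, the sketch below illustrates the kind of automation the role involves: replacing manual handling of inbound files with a pipe that ingests them automatically. It is a minimal example only; the connection details, database/schema, and object names (RAW_EVENTS, events_stage, events_pipe) are hypothetical placeholders, not our production setup, and it assumes the snowflake-connector-python package and an existing S3-backed external stage.

```python
# Minimal sketch: create a Snowpipe that auto-ingests JSON files landing in
# an S3-backed stage, so one-off inbound requests no longer need manual COPYs.
# All object names and credentials below are hypothetical placeholders.
import os

import snowflake.connector  # pip install snowflake-connector-python

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="RAW",
)
cur = conn.cursor()
try:
    # Landing table: raw JSON goes into a single VARIANT column.
    cur.execute("CREATE TABLE IF NOT EXISTS RAW_EVENTS (payload VARIANT)")
    # The pipe runs this COPY automatically whenever the stage's cloud storage
    # emits a new-file event (AUTO_INGEST requires event notifications to be
    # configured on the S3 bucket backing @events_stage).
    cur.execute("""
        CREATE PIPE IF NOT EXISTS events_pipe AUTO_INGEST = TRUE AS
        COPY INTO RAW_EVENTS
        FROM @events_stage
        FILE_FORMAT = (TYPE = 'JSON')
    """)
finally:
    cur.close()
    conn.close()
```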
The product creates a membership-based Data Co-Op ecosystem that offers reliable, high-quality data and services for digital and SaaS products. The Co-Op ecosystem provides access to leading data streams, ML models, and data strategy, and is built on top of the Snowflake and Molecula platforms. By using a "consumption validation model", the platform ensures that the data provided by its members is of high quality. The Co-Op ecosystem aims to provide fixed-price access to uninterrupted data and services without relying on traditional third-party suppliers. With over 2.8 trillion rows of data managed in 2021 and 20+ top data strategy experts, the platform is poised to help its members innovate and outperform competitors in their respective markets.
1) Call with Recruiter
2) Technical interview with Senior Data Engineer
3) Client interview