Client: Our client has been a pioneer in US school education since 2000, leading the way in next-generation curriculum and formative assessment. They develop interactive products and solutions for teachers and students, targeting educational needs from elementary through high school. Operating in all 50 states, their products are used in 21,000+ schools by more than 10,000,000 students. The company is technology-driven, with many software engineers involved in product development.
- Position overview: As a Data Engineer, you'll be a crucial member of our data team, building and maintaining the infrastructure that lets teams across the organization, and our clients, leverage data effectively. You'll foster a culture of knowledge sharing, helping teams use data tools to improve their workflows and enhance client experiences. In this role, you'll focus on developing data systems and driving innovation, ensuring these tools produce meaningful insights and improvements.
- The estimated salary range for this position is $96,000 to $132,000.
- Technology stack: ETL: Matillion, dbt
- Snowflake
- Cube.dev
- Responsibilities: Help teams build engaging apps by leveraging millions of data points to enhance the student experience
- Develop reusable data pipelines that give teachers better insights into their students' progress and needs
- Build, automate, and maintain a scalable, secure data platform, and collaborate with data science and analyst teams to ensure efficient data access
- Implement and manage data storage solutions (e.g., data lakes, databases), optimize workflows, ensure data privacy, and stay compliant with regulations
- Use tools like Snowflake, Airflow, dbt, SQL, Python, Terraform, Looker, and Datadog to analyze and improve system performance and resolve issues
- Engage in agile practices, contribute to cross-team knowledge sharing, and become an expert in data models and standards within the education industry
- Build well-tested and optimized ETL pipelines for full and delta data extraction
- Collaborate with analysts and learning scientists to design and implement ETL and data warehousing solutions
- Contribute to open-source projects and industry data standards like Caliper Analytics or xAPI
- Improve automation processes for deployment and testing of data platform services
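The "full and delta data extraction" pattern above refers to loading a whole table on the first run and only changed rows afterward. A minimal sketch of how such a query might be built, assuming a hypothetical `updated_at` watermark column (table and column names are illustrative, not from the client's schema):

```python
from datetime import datetime
from typing import Optional

def build_extract_query(table: str, watermark: Optional[datetime]) -> str:
    """Build an extraction query: a full load when no watermark exists
    (first run), or a delta load of rows changed since the last run."""
    base = f"SELECT * FROM {table}"
    if watermark is None:
        # Full extraction: no prior state, pull everything.
        return base
    # Delta extraction: only rows modified after the saved watermark.
    return f"{base} WHERE updated_at > '{watermark.isoformat()}'"
```

In practice the watermark would be persisted between pipeline runs (e.g., in an orchestration tool's state store) rather than passed in directly.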
- Requirements: 2+ years of professional experience in software development, site reliability, or data engineering
- Strong computer science and data engineering fundamentals
- Proven proficiency in SQL and Python (or another development language)
- Understanding of ETL/ELT pipelines and data warehousing design and tools
- Experience with a modern data stack (Airflow, Snowflake, dbt, Cube.dev)
- Experience with data formats (JSON, CSV, XML) and storage techniques (3NF, Star Schema, Data Lake)
- Experience working with Infrastructure-as-Code frameworks (Terraform)
- Strong communication skills, both written and verbal
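As a toy illustration of the data-format handling listed above (converting between JSON and CSV, two of the named formats), a minimal sketch using only the standard library; the record fields are hypothetical:

```python
import csv
import io
import json

# Hypothetical student-event records as they might arrive in JSON.
records_json = '[{"student_id": 1, "score": 87}, {"student_id": 2, "score": 92}]'

def json_to_csv(payload: str) -> str:
    """Convert a JSON array of flat records into CSV text."""
    rows = json.loads(payload)
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
    writer.writeheader()   # header row from the first record's keys
    writer.writerows(rows) # one CSV row per JSON record
    return buf.getvalue()
```

Real pipelines would of course handle nested structures, missing keys, and schema evolution, which this sketch deliberately ignores.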
- Nice to have: Experience with tools like Snowflake, AWS (S3, RDS, DynamoDB), Airflow, dbt, Fivetran, Matillion, and Looker
- Familiarity with AWS services like Kinesis, Lambda, and API Gateway
- Passion for mentoring and sharing knowledge with others
- Open-source contributions or personal projects demonstrating a passion for learning and building
- Experience in education or ed-tech