Job Description:
The incumbent is responsible for defining, developing, and implementing new systems and major enhancements to existing systems, as well as providing production support for highly complex systems. The incumbent is also capable of providing project leadership for major feasibility or business systems analysis studies.
Required Qualifications:
- Bachelor’s degree, or equivalent additional years of experience
- 5-7+ years of relevant experience
- Experience with data engineering tools and technologies, such as Apache Spark (including PySpark), Hadoop, and Hive
- Experience with cloud computing platforms, such as Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP); GCP experience in particular
- Familiarity with cloud computing concepts, such as virtual machines, storage, and networking
- Experience with cloud-based data engineering tools and services
- Familiarity with GCP services, such as Google Cloud Storage, Cloud Functions, Pub/Sub, Cloud Scheduler, Cloud Run, BigQuery, and Cloud SQL
- Experience with GCP APIs and SDKs
- Familiarity with data modeling and database design
- Strong SQL skills
- Strong problem-solving and analytical skills
- Excellent communication and teamwork skills
Top 3 Required Skills:
- Knowledge of the GCP environment
- Terraform development
- Database modeling
Preferred Qualifications:
- Experience writing Python scripts for data engineering tasks
- Familiarity with Python libraries for data manipulation and analysis, such as NumPy, Pandas, and SciPy
- Healthcare or insurance background preferred
Job Type: Contract
Pay: $45.00 - $55.00 per hour
Experience:
- Data modeling: 5 years (Required)
- Terraform: 5 years (Required)
- Apache Spark: 4 years (Required)
Work Location: Remote