Role: Big Data Engineer (Spark and AWS)
Location: McLean or Richmond, VA (Hybrid)
Must-Have Skills: AWS, Spark, PySpark, SQL, and Python
Experience with Apache Spark/Scala, Spark SQL, and related Spark ecosystem tools and libraries.
Knowledge of big data technologies such as Hadoop, HDFS, and HBase, and of distributed computing frameworks for large-scale data processing.
Hands-on Linux scripting experience.
Excellent communication and collaboration skills, with the ability to work effectively in a cross-functional team environment.
Knowledge of or experience with Git/Bitbucket, Gradle, Jenkins, Jira, Confluence, or similar tools for building Continuous Integration/Continuous Delivery (CI/CD) pipelines.
Job Type: Full-time
Pay: $60.00–$65.00 per hour
Experience:
- Informatica: 1 year (Preferred)
- SQL: 1 year (Preferred)
- Data warehouse: 1 year (Preferred)
Work Location: Remote