Looking for a Software Engineer with knowledge of Hadoop and Spark and experience with data mining and stream-processing technologies (Kafka, Spark Streaming, Akka Streams).
Responsibilities:
Translate functional and technical requirements into detailed specifications for solutions running on AWS, using services such as EC2, ECS, RDS Aurora MySQL, SQS, SNS, KMS, and Athena.
Migrate the existing Prometheus and DARQ applications to a multi-account AWS cloud environment using containers and scalable compute platforms such as Docker (ECS) and Kubernetes.
Develop Spark code using Scala and Spark-SQL/Streaming for faster testing and processing of data (a minimal Scala sketch follows this list).
Create, optimize, and troubleshoot complex SQL queries to retrieve and analyze data from databases such as Redshift, Oracle, MS SQL Server, MySQL, and PostgreSQL.
Design ETL transformations and jobs using the Pentaho Kettle Spoon designer (5.7.12) and Pentaho Data Integration Designer, and schedule them on the ETL workflow engine, Carte Server.
Design, code, test, and customize RHQ reports for Market systems data and provide data quality solutions to meet client requirements.
Develop complex queries against data sources such as the Nasdaq Data Warehouse and the Revenue Management System, and tune query performance.
Create scripts to automate data ingestion processes.
Build and deploy artifacts (RPMs) and services to Dev, QC, and Prod AWS accounts using GitLab pipelines.
Validate DARQ data and reports, and manage document libraries on the Confluence collaboration site.
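The Spark-SQL/Streaming and SQL-query duties above could look roughly like the following minimal Scala sketch. It is illustrative only: the Kafka topic trade-events, the broker address, the event schema, and the one-minute average-price aggregation are assumptions for the example, not details from this posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

object TradeEventStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("trade-event-stream")
      .getOrCreate()

    // Hypothetical event schema, for illustration only.
    val schema = new StructType()
      .add("symbol", StringType)
      .add("price", DoubleType)
      .add("ts", TimestampType)

    // Read a stream of JSON events from a Kafka topic (topic and broker are assumptions).
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "trade-events")
      .load()
      .select(from_json(col("value").cast("string"), schema).as("e"))
      .select("e.*")

    // Expose the stream to Spark SQL and compute average price
    // per symbol over one-minute windows.
    events.createOrReplaceTempView("trades")
    val avgPrices = spark.sql(
      """SELECT symbol, window(ts, '1 minute') AS w, avg(price) AS avg_price
        |FROM trades
        |GROUP BY symbol, window(ts, '1 minute')""".stripMargin)

    // Write the windowed aggregates to the console for quick testing.
    val query = avgPrices.writeStream
      .outputMode("update")
      .format("console")
      .start()

    query.awaitTermination()
  }
}
```

In production the console sink would typically be replaced by a durable sink (e.g. a table or object store) with checkpointing enabled.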
Requirements:
The minimum education requirement to perform the above job duties is a Bachelor’s degree in Computer Science, Information Technology, or a related technical field.