Big Data Developer - Hadoop

Responsibilities

- Partner with data analysts, product owners, and data scientists to understand requirements and solution designs, find bottlenecks, and drive resolutions.
- Support and enhance data pipelines and ETL jobs built on heterogeneous sources.
- Transform data using data-mapping and data-processing tools such as Kafka, Spark, Spark SQL, and HiveQL (a minimal Spark sketch follows the lists below).
- Expand and grow data platform capabilities to solve new data problems and challenges.
- Adapt conventional big-data frameworks and tools to the use cases the project requires.
- Work with Hadoop clusters on a major cloud (AWS/Azure/GCP).

Requirements

- 3 to 5 years of experience on analytical projects involving data lakes, data warehouses, big data solutions, or cloud BI solutions at a major systems integrator.
- 3+ years of experience with the Hadoop ecosystem and big data technologies.
- Knowledge of design strategies for building scalable, resilient, always-on data lakes.
- Hands-on experience with the Hadoop ecosystem: HDFS, MapReduce, HBase, Hive, Impala, Spark, Kafka.
- Experience implementing Hadoop data lakes: data storage, partitioning, splitting, and selecting file types (Parquet, Avro, ORC) for specific use cases.
- Experience with at least one query language: SQL, Hive, Impala, Drill, etc.
- Exposure to at least one NoSQL database: HBase, MongoDB, Cassandra, etc.
- Experience with Agile (Scrum) development methodology.
- Exposure to data ingestion frameworks such as Kafka, Sqoop, Storm, NiFi, or Spring Cloud (a streaming-ingest sketch also follows below).
- Strong development and automation skills; must be very comfortable reading and writing Scala, Python, or Java code.

Desired

- Experience with one of the major Hadoop distributions: Apache, MapR, or Cloudera.
- Hadoop cluster experience on a major cloud (AWS/Azure/GCP).
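To ground the Spark/Spark SQL and file-layout items above, here is a minimal PySpark sketch of a batch transformation that writes date-partitioned Parquet. It is an illustration only; the paths, column names (event_ts, user_id), and app name are hypothetical and not taken from the posting.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("events-etl").getOrCreate()

    # Read raw JSON events from the lake (hypothetical path).
    raw = spark.read.json("s3a://datalake/raw/events/")

    # Example transformation: derive a date column and drop records
    # that are missing a user id.
    curated = (raw
        .withColumn("event_date", F.to_date("event_ts"))
        .filter(F.col("user_id").isNotNull()))

    # Write date-partitioned Parquet, the kind of partitioning and
    # file-type decision the Requirements list calls out.
    (curated.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3a://datalake/curated/events/"))

Partitioning the output by event_date lets downstream Hive or Impala queries prune partitions rather than scan the full dataset.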

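In the same spirit, a sketch of Kafka-based ingestion with Spark Structured Streaming, assuming the spark-sql-kafka connector package is on the classpath; the broker address, topic name, and lake paths are again made up for illustration.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

    # Subscribe to a Kafka topic (hypothetical broker and topic).
    stream = (spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "broker1:9092")
        .option("subscribe", "events")
        .option("startingOffsets", "latest")
        .load())

    # Kafka delivers key/value as binary; cast the payload to string.
    events = stream.select(F.col("value").cast("string").alias("payload"))

    # Land raw payloads in the lake as Parquet, with a checkpoint so the
    # stream can recover its Kafka offsets after a restart.
    query = (events.writeStream
        .format("parquet")
        .option("path", "s3a://datalake/raw/events/")
        .option("checkpointLocation", "s3a://datalake/checkpoints/events/")
        .start())

    query.awaitTermination()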