Hadoop jobs/walk-ins/openings

- Last updated on 24-January-2017

If you have knowledge of Apache Hadoop, we have many openings for you across a range of IT companies. Find Hadoop jobs in top-tier cities such as Noida, Bangalore, Mumbai, Pune, Kolkata and many more. The companies listed with Peel Jobs are hiring across various positions centred on managing and maintaining big data, including Hadoop developer and Hadoop administrator. Even if you do not yet have much experience as a Hadoop developer, these jobs give you a chance to learn Hadoop and enhance your skills. Submit your application to these companies and give your career a strong foundation.

Showing 1 - 20 of 40 openings

    • Knowledge of HDFS and MapReduce framework
    • Strong in MapReduce programs
    • Experience in data loading techniques using Sqoop and Flume
    • Understanding of Hadoop 2.x Architecture
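
    The HDFS/MapReduce knowledge asked for above centres on the map-shuffle-reduce pattern: mappers emit key/value pairs, the framework groups them by key, and reducers aggregate each group. A minimal pure-Python sketch of a word count (illustrative only, not the real Hadoop API):

```python
from collections import defaultdict

def map_phase(line):
    # Mapper: emit a (word, 1) pair for every word in an input line
    for word in line.split():
        yield word.lower(), 1

def reduce_phase(key, values):
    # Reducer: sum the counts collected for one key
    return key, sum(values)

def run_job(lines):
    # Shuffle: group mapper output by key, as the framework would do
    groups = defaultdict(list)
    for line in lines:
        for key, value in map_phase(line):
            groups[key].append(value)
    return dict(reduce_phase(k, v) for k, v in groups.items())

print(run_job(["Hadoop stores data", "Hadoop processes data"]))
# → {'hadoop': 2, 'stores': 1, 'data': 2, 'processes': 1}
```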

    Interested candidates mail your resume to Suvith.dadige@valuelabs.com

    • Proficient understanding of distributed computing principles
    • Management of Hadoop cluster, with all included services
    • Ability to solve any ongoing issues with operating the cluster
    • Proficiency with Hadoop v2, MapReduce, HDFS, Spark
    • Knowledge of various ETL techniques and frameworks, such as Flume, Sqoop etc
    • Experience with various messaging systems, such as Kafka or RabbitMQ
    • Experience with Big Data ML toolkits, such as Mahout, SparkML, or H2O
    • Experience with building stream-processing systems, using solutions such as Storm or Spark-Streaming
    • Experience with NoSQL databases, such as HBase, Cassandra, MongoDB
    • Good understanding of Lambda Architecture, along with its advantages and drawbacks
    • Experience with one or more Hadoop distributions, such as Cloudera/MapR/Hortonworks
    • Strong knowledge of Big Data querying tools, such as Pig, Hive, and Impala
    • Experience with integration of data from multiple data sources

    If interested, please send your profile to msarana@evoketechnologies.com.

  • 1. Java experience (Core Java + J2EE + Spring), minimum 5 years
    2. RDBMS experience (Oracle / MySQL / MSSQL)
    3. Hadoop experience, 1.5 to 3 years
    4. Very good proficiency in Spark 1.6
    a. Spark Core RDD concepts + broadcast variables + accumulators + transformations + actions
    b. Spark SQL
    c. Spark Streaming
    d. Able to write Spark jobs in Java 8 / Scala / Python
    5. Very good proficiency in Hadoop 2.4
    a. HDFS read/write operations + WebHDFS (REST interface)
    b. Hive tables
    c. Able to write Java MapReduce jobs
    d. YARN concepts
    6. Worked on at least ONE of the following Big Data appliances:
    a. Cloudera CDH 5.7.1
    b. Hortonworks Data Platform (HDP)
    c. MapR
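
    The Spark RDD items in point 4 above boil down to one idea: transformations (map, filter) are lazy and only describe a pipeline, while actions (collect, reduce) force computation and return a result to the driver. The toy class below mimics that split in plain Python; it is a conceptual sketch, not Spark's actual API:

```python
from functools import reduce as _reduce

class MiniRDD:
    """Toy stand-in for a Spark RDD: transformations are lazy, actions compute."""

    def __init__(self, data):
        self._data = data  # any iterable; nothing is evaluated yet

    # --- transformations: return a new MiniRDD wrapping a lazy generator ---
    def map(self, fn):
        return MiniRDD(fn(x) for x in self._data)

    def filter(self, pred):
        return MiniRDD(x for x in self._data if pred(x))

    # --- actions: force evaluation and return a plain value ---
    def collect(self):
        return list(self._data)

    def reduce(self, fn):
        return _reduce(fn, self._data)

squares_of_odds = MiniRDD(range(1, 6)).map(lambda x: x * x).filter(lambda x: x % 2 == 1)
print(squares_of_odds.collect())  # → [1, 9, 25]
```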

    Good to Have Skillset:
    1. Worked on Kafka (Distributed Messaging)
    2. Worked on Cassandra (Distributed Storage)
    3. Spark + Kafka Integration
    4. Spark + Cassandra Integration
    5. Knowledge of Impala Query Engine
    6. Knowledge of Tableau / Qlikview

    Note: We are looking for immediate joiners only.

    Python Hadoop Java

    • Develop highly scalable classifiers and tools by leveraging machine learning, data regression, and rule-based models
    • Adapt standard machine learning methods to best exploit modern parallel environments
    • Create language models from petabytes of text data in different languages
    • Suggest, collect and synthesize requirements and create effective feature roadmaps
    • Work as part of analytics teams to implement algorithms that power user and developer-facing products
    • Be accountable for measuring and optimizing the quality of your algorithms


    • BE/ B. Tech degree in Computer Science or related quantitative field with 2-4 years of relevant experience
    • Experience with scripting languages such as Perl, Python, PHP and shell scripts
    • Experience with recommendation systems, targeting systems, ranking systems or similar systems
    • Strong background in one or more of Large-scale Data Mining, Machine Learning, Artificial Intelligence, Pattern Recognition or Natural Language Processing
    • Background in text understanding
    • Experience with Hadoop/HBase/Pig or MapReduce/Bigtable/AzureML/Scala/Hive/H2O is a plus
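
    As a flavour of the rule-based models this role mentions, the sketch below classifies text by scoring it against hand-written keyword sets; the labels and keywords are invented purely for illustration:

```python
from collections import Counter

# Invented keyword rules; a real system would learn or curate these.
RULES = {
    "sports": {"match", "score", "team"},
    "finance": {"market", "stock", "price"},
}

def classify(text):
    # Score each label by how many of its keywords appear in the text
    tokens = set(text.lower().split())
    scores = Counter({label: len(tokens & kws) for label, kws in RULES.items()})
    label, hits = scores.most_common(1)[0]
    return label if hits else "unknown"

print(classify("The team won the match with a record score"))  # → sports
```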

    Note: To apply, send your resume to careers@tredence.com.

    PHP Python perl Hadoop
    • Set up production Hadoop clusters with optimum configurations
    • Drive automation of Hadoop deployments, cluster expansion and maintenance operations
    • Manage Hadoop cluster, monitoring alerts and notification
    • Deployment of upgrades, updates and patches
    • Provide 24x7 tier-3 troubleshooting and break-fix support for production services
    • Diagnosis of installation & configuration issues
    • Diagnosis of cluster management issues
    • Diagnosis of performance issues
    • Job scheduling, monitoring, debugging and troubleshooting
    • Monitoring and management of the cluster in all respects, notably availability, performance and security
    • File system management and monitoring
    • Set up High Availability/Disaster Recovery environment
    • Debug/Troubleshoot environment failures/downtime
    • Ingesting additional data sources into Hadoop in either streaming or batch mode
    • Commissioning and decommissioning worker nodes
    • Working with data delivery teams to setup new Hadoop users. This job includes setting up Linux users, setting up Kerberos principals and testing HDFS, Hive, Pig and MapReduce access for the new users
    • Set up security configurations in Hadoop
    • Data transfer between Hadoop and other data stores (incl. relational database)
    • Configuring the cluster to be rack-aware
    • Performance tuning of Hadoop clusters and Hadoop Map Reduce routines
    • Manage and review Hadoop log files
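
    One of the duties listed above, configuring the cluster to be rack-aware, is typically handled by pointing the net.topology.script.file.name property at a script that maps DataNode addresses to rack paths. A hypothetical Python version (the IP-to-rack table here is invented) could look like:

```python
#!/usr/bin/env python
# Hypothetical Hadoop rack-awareness topology script: Hadoop invokes it
# with DataNode IPs/hostnames as arguments and reads back one rack path
# per input. The mapping below is made up for illustration.
import sys

RACKS = {
    "10.0.1.11": "/dc1/rack1",
    "10.0.1.12": "/dc1/rack1",
    "10.0.2.21": "/dc1/rack2",
}

def rack_for(host):
    # Unknown hosts fall back to the conventional default rack
    return RACKS.get(host, "/default-rack")

if __name__ == "__main__":
    for host in sys.argv[1:]:
        print(rack_for(host))
```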

    Note: To apply, follow the link http://www.impetus.com/content/join-us

    • Hadoop and Linux/UNIX administration and troubleshooting skills
    • Hadoop configuration
    • Understanding of Hadoop logging
    • Performance Tuning/HIVE/Pig/Map Reduce/Spark/Storm
    • Ambari knowledge is a plus
    • Experience with automation
    • Strong knowledge of Linux systems (RHEL/CentOS/OEL 5.x) 
    • Capacity Planning/Disk Space Management
    • Software Installation/Software Patches
    • Some Java software development skills are a plus (Spring)
    • Strong communication skills
    • Fluent in at least one scripting language (Shell/Perl/Python/Java/etc.) 
    • Solid understanding of NameNode HA architecture
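
    The understanding of Hadoop logging asked for above often starts with summarising daemon logs by severity. A small sketch assuming the usual Log4j-style line layout (timestamp, level, class, message); the sample lines are fabricated:

```python
import re
from collections import Counter

# Matches Log4j-style lines such as:
# "2017-01-24 10:15:02,123 WARN org.apache.hadoop...DataNode: slow disk"
LOG_LINE = re.compile(r"^\S+ \S+ (TRACE|DEBUG|INFO|WARN|ERROR|FATAL)\b")

def level_counts(lines):
    # Tally how many log lines were emitted at each severity level
    counts = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m:
            counts[m.group(1)] += 1
    return counts

sample = [
    "2017-01-24 10:15:02,123 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: started",
    "2017-01-24 10:15:07,456 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: slow disk",
    "2017-01-24 10:15:09,789 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: volume failure",
]
print(level_counts(sample))
```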

    Interested candidates can send resume to arani@charterglobal.com

    Json Unix Xml Hadoop spark
    • MCA, MCS, BE, M.Sc. (Comp/IT), with no percentage criteria.
    • BE graduates other than Computer and IT can also apply.
    • Should be available to join us immediately
    • Should be flexible to work on any skills/technologies
    • Ready to work long hours
    • Must possess excellent analytical and logical skills.
    • Candidates with knowledge of the above technologies shall be given preference
    • Project Details
    • Currently Agrobytes is working hard to make the Digital India dream come true. Agrobytes is working in the following sectors:
    • 1. Agriculture
    • 2. Education
    • 3. Health
    • 4. Real estate
    • 5. Banking Projects
    • 6. Insurance Projects
    • 7. International Projects
    • 8. Government projects
  • Responsibilities

    • Working domain knowledge in Microsoft cloud technologies, specifically Azure
    • Working knowledge on application of end-to-end architecture strategies, standards, processes, and tools in their solution designs
    • Ensures compliance with all architecture directions and standards through cross-organization consulting and direct involvement in development efforts.
    • Support the definition and selection of enterprise tools, technologies and processes
    • Act as a go-to person in application design and development
    • Knowledge of developing high-quality products that meet customer requirements and are consistent with enterprise architectural standards
    • Knowledge of increasing re-use and reducing redundancy in applications and technology designs
    • Supports the development of strategy, frameworks, best practices and patterns.
    • Adherence to corporate standards in application design, development, and testing
    • Support future collaborative work team websites related to Enterprise

    Required Skills:

    • JavaScript (NodeJS, ReactJS) and .NET (C#) on the Azure platform
    • Experience building a micro-service architecture
    • Web services and APIs as in RESTful and SOAP
    • Knowledge in network protocols such as TCP/IP, UDP/IP.
    • Ability to quickly learn new programming languages and technologies as requirements evolve
    • Continuous integration skills for real-time testing and diagnostics
    • Agile practices, design patterns, UML, and object-oriented programming, the basics for coders
    • Basics of Cyber Security


    • Extensive experience in Designing, Capacity planning and cluster setup for Hadoop.
    • Hadoop operational expertise such as troubleshooting skills, bottlenecks, basics of memory, CPU, OS, storage, and networks.
    • Good knowledge of HBase, Hive, Pig and the Apache web server
    • Good knowledge of performance tuning, monitoring and administration using Cloudera
    • Hands-on experience in Unix administration and shell scripting to handle file management and job scheduling

    Good to Have Skills:

    • Familiarity with open source configuration management and deployment tools such as Puppet or Chef.
    • Knowledge of any scripting language (Bash, Perl, Python).
    • Good to have knowledge of Nagios, Kafka or any message broker (ActiveMQ, RabbitMQ).
    • Good knowledge of Linux and tools like Splunk, Tableau

    Interested candidates, send your profiles to vinod.kumar@datamtaics.com

    • Extensive experience working in Hadoop eco system tools Map Reduce, Hive, Pig, Sqoop
    • Must have experience in designing and building Hadoop based applications, evaluation of tools based on the requirements and have worked with one or more major Hadoop distributions
    • If it is OK for you, revert back to me with your updated CV ( sowmya.palepu@anantha.co.in ) and I will share your profile with the client
    • Refer your friends and colleagues as well
    Unix Hadoop Java Linux
    • Resource who has worked on middleware application and excellent JAVA development with Big Data Hadoop Technologies
    • Quick learner and should have good exposure to direct client facing work culture.
    • 10+ Years Experience designing, developing , deploying & Supporting large scale distributed systems and API development
    • 10+ Years experience MUST in Core Java, Web services, REST JMS technologies and good knowledge of various design patterns with Big Data technologies
    • 5+ years of experience with data serialization formats (POF, Thrift, JSON, XML, AVRO etc.)
    • Should have knowledge of build tools like Maven and Ant.
    • Should have knowledge of continuous integration tools like Jenkins
    • Should have experience in software versioning and revision control system like SVN, Git or CVS
    • Should have experience in static source code analyzer tools like PMD, SonarQube, Checkstyle or FindBugs
    • Have knowledge of TDD [Test-driven development] methodology
    • Knowledge of powermock, mockito or easymock API for unit testing
    • Hands on experience on NOSQL DB, Data Modelling etc
    • 2+ Yrs Hands on experience in Oracle Coherence: Types of caches, Caching Schemes, Cache Services, POF format etc. or similar technologies
    • PLEASE NOTE: This role requires working in shifts (1 PM to 10 PM)
  • We are hiring for a client

    We have urgent openings on the below requirements, if you are interested, please send us your updated profile in word format.

    Mandatory Skills: Hadoop ( CDH), data ingestion for CDH
    Good to have skills: SAS/ R analytics, QV, Tableau
    Domain: Financial Services

    If interested, please send your updated profile to raj@vedainfo.in along with the following details. ( Mandatory )

    • Full name:
    • Mobile No:
    • Total Experience:
    • Relevant Experience:
    • Notice Period:
    • Current Organization:
    • Current Location:
    • DOB:
    • Current CTC:
    • Expected CTC:

    Please fill all the above details so that we can send the exact information to the client.

    • Relevant Experience: 3.5-4.5 years (Total: 4-8 yrs)
    • UNIX: Shell Scripting (must have), Unix utilities like sed, awk, perl, python
    • Scheduling knowledge (Control M, Autosys, Maestro, TWS, ESP)
    • ETL Skills (Preferred): ETL Mapping Development, ETL standard environment parameters, SDLC, Data Analysis
    • Developer Experience: At least 2 years
    • Project Experience: Minimum 2 complete projects over a period of last 2 years, out of which 1 project must be independently leading a team of 3-4.
    • Database(Preferred): SQL Proficient, DB Load / Unload Utilities expert, relevant experience in Oracle, DB2, Teradata (Preferred)
    • Project Profiles: At least 2-3 source systems, multiple targets, simple business transformations with daily and monthly schedules
    • Expected to produce LLD, work with testers, work with PMO and develop ETL Mappings, schedules
    • Primary Skills(Must have) Hadoop , BigData , Unix shell scripting
    • Secondary Skills(Good to have) Oracle, DB2, Teradata (Preferred),
    • Hadoop exposure/certification preferred
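
    The ETL mapping development mentioned above follows the classic extract-transform-load pattern. A toy pure-Python version over an assumed pipe-delimited feed (the field layout id|name|amount is invented for illustration):

```python
import csv
import io

# Toy pipe-delimited feed; a real job would read files and load a target DB.
RAW = "101|alice|250.00\n102|bob|99.50\n"

def extract(stream):
    # Extract: parse the delimited feed into raw string rows
    return csv.reader(stream, delimiter="|")

def transform(rows):
    # Transform: apply typing and simple business rules to each row
    for rid, name, amount in rows:
        yield {"id": int(rid), "name": name.title(), "amount": float(amount)}

def load(records):
    # Load: stand-in for a bulk insert into the target store
    return list(records)

records = load(transform(extract(io.StringIO(RAW))))
print(records[0])  # → {'id': 101, 'name': 'Alice', 'amount': 250.0}
```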

    Note: This is a Scheduled Drive - please share your resumes with your availability to farhana.khan@capgemini.com

    Unix Big data Hadoop
    • Must have hands-on experience with one of the enterprise Hadoop distributions (Cloudera, Hortonworks or MapR)
    • Strong knowledge in data modeling, Hive, Oozie, HDF, Pig and shell scripting. Good expertise in MapReduce and Sqoop.
    • Ability to write complex Hive queries; sound knowledge of Cassandra, Spark and Kafka.
    • Knowledge of build tools (Maven, Ant) and SVN or Git will be an added advantage
    • Experience in databases like SQL/MySQL is desirable
    • Product development experience for large scale systems with high volume and high performance requirements with Fundamentals of multi-threading on multi-core systems
    • Experience in product development life-cycle and product process oriented agile development environment.
    • Must be Technically equipped with Java, J2EE, Spring, Hibernate, REST API, JavaScript, Jquery, HTML.
    • Must possess Strong written and verbal communication skills to interact with customers / clients on a regular note.
    • Must function independently with limited supervision, Must act as a Team Mentor and should be a Team Player by proactively involving in highly collaborative environment

    If you are interested, kindly send your updated profile to rajesh@msr-it.com along with the below:

    Current salary:
    Expected Salary:
    Notice Period:
    Reason for Job Change:

    • Proven expert-level understanding of Cloudera Hadoop and the Apache ecosystem, namely YARN, Impala, Hive, Flume, HBase, Sqoop, Apache Spark, Apache Storm, Crunch, Java, Oozie, Pig, Scala, Python, Kerberos/Active Directory/LDAP, etc.
    • Proven Experience showcasing technical and operational feasibility of Hadoop Architecture solutions
    • Experience and detailed knowledge of Hadoop development utilizing SyncSort DMX-H, Subversion, SQL Knowledge and equivalent technologies
    • 4-year degree in Computer Science/Software Engineering or a related degree program, or equivalent application development, implementation and operations experience
    • Minimum 3+ years of related database development experience, including Data Architect experience working with 'Big Data' technologies such as Hadoop, ETL tools, and large datasets
    • Excellent verbal and written skills, Proficient with MS office Tools, Strong analytical and problem solving skills.
    • Good hands-on in Java/J2EE (Strong in core Java is mandatory)
    • Good Hands on in Spring, Hibernate & JDBC
    • Good hands-on in the Hadoop EcoSystem (Mapreduce, Spark, Hive and Oozie) - Intermediate to Advanced Level (and AWS cloud is good to have)
  • Conduct training for our corporate clients throughout India. Conduct in-house modular trainings. Support software development at BitCode.

    Big data Hadoop
    • Experience in Java/J2EE /Hadoop
    • Open Source contributor and technology evangelist
    • Experience in designing/ architecting and implementation of complex projects/ products with considerable data size (GB/ PB) and high complexity
    • Strong Knowledge in any of NoSQL/ Graph databases (Cassandra/ MongoDB/ HBase/CouchDB/ Neo4J etc.)
    • Knowledge of clustered deployment architecture
    • Good in mathematics, specifically statistics
    J2EE Hadoop Java
    • 2-6 years of experience with testing using Python, SoapUI, JUnit, Java, Perl and Scala
    • Basic Networking Knowledge
    • Strong Experience on Python and Java programming skills
    • Experience in Big Data technologies (Hadoop/NoSQL databases/streaming platforms/MongoDB) a big plus
    • Strong experience with Automated testing and continuous integration
    • Good in Unix and Linux background with administration skills
    • Experience working with an Agile team using test-driven development. Short sprint cycles of 2-4 weeks.
    • Ability to influence product design in order to meet testability requirements
    • Ability to design test frameworks
    • Ability to deep dive technically in product design and modules designs
    • Knowledge of storage/network protocols a big plus
    • Knowledge of NFS, DFS and associated environments highly desired
    • Hands on knowledge of various java stack trace, memory mapping tools a must
    • Experience with automation on CI environment
    • Experience with scripting languages Python, Perl
    • Experience with analyzing technical requirements and design
    • Should be a self-starter and able to work with minimal guidance
    • Demonstrated experience with data validation testing on large complex projects using PLSQL and big databases
    • Strong knowledge of back-end testing on databases and BI/DW, REST/SOAP APIs
    • Understand when to use black box, white box and gray box test approaches
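
    The back-end data validation testing described above usually reduces to reconciling a source extract against a target load. A minimal sketch with made-up rows (a real check would query both databases):

```python
def validate_load(source_rows, target_rows, key="id"):
    # Index both sides by the business key, then diff them
    src = {r[key]: r for r in source_rows}
    tgt = {r[key]: r for r in target_rows}
    missing = sorted(set(src) - set(tgt))                      # rows never loaded
    mismatched = sorted(k for k in src.keys() & tgt.keys()
                        if src[k] != tgt[k])                   # rows loaded wrong
    return {"missing": missing, "mismatched": mismatched}

source = [{"id": 1, "amt": 10}, {"id": 2, "amt": 20}, {"id": 3, "amt": 30}]
target = [{"id": 1, "amt": 10}, {"id": 2, "amt": 25}]
print(validate_load(source, target))  # → {'missing': [3], 'mismatched': [2]}
```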