
Big Data Architect

Work from Office, Permanent role

Send your CV here

Role & Responsibilities:

  • Holds decision-making authority for data analysis and should be able to architect solutions for massive data volumes
  • Skilled in big data tools and technologies, including Hadoop, Accumulo, MapReduce, Hive, HBase, Panoply, and Redshift
  • 10+ years of overall experience as a Principal Consultant / Subject Matter Expert working on Big Data platforms (Hadoop, MapReduce, NoSQL, real-time streaming, search, Spark, Java, and ETL platforms)
  • Experience managing very large data repositories and delivering distributed, highly scalable applications
  • End-to-end design and build of near-real-time and batch data pipelines; expertise in SQL and data modeling; experience working in an Agile development process
  • Ability to work with large data sets: applicants must be able to work with highly diverse data across multiple types, formats, and sources
  • Self-starter: able to work with minimal supervision
  • Ability to quickly prototype, architect, and build software using the latest technologies
  • Good customer-facing, interpersonal, and communication skills
  • Experience addressing non-functional requirements and large application deployment concerns such as scalability, performance, availability, reliability, and security
  • Experience in one or more of the following technologies:

    o Hadoop (Apache/Cloudera/Hortonworks) and/or other MapReduce platforms
    o Hive, Pig, Sqoop, Flume, and/or Mahout
    o NoSQL – HBase, Cassandra, or MongoDB (any one NoSQL store is sufficient)
    o Spark, Storm, Kafka (Spark and Kafka are a must; see the illustrative sketch after this list)
    o Search platforms – Solr, Elasticsearch
    o Organizing streaming data loads in Hadoop

  • Good background in build/configuration management and ticketing systems such as Maven, Ant, and JIRA
  • Strong in shell scripting, Java, and EDW platforms
  • Knowledge of any data integration and/or EDW tools is a plus
  • Extensive experience with an enterprise messaging framework (Kafka or RabbitMQ)
  • Must be able to provide out-of-the-box solutions whenever required
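
Purely as an illustration of the Spark-and-Kafka requirement above (a sketch, not part of the role description), here is a minimal Spark Structured Streaming job in Java that reads a Kafka topic and lands it as Parquet; the broker address, topic name, and paths are placeholder assumptions:

    import java.util.concurrent.TimeoutException;

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;
    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.sql.streaming.StreamingQuery;
    import org.apache.spark.sql.streaming.StreamingQueryException;

    public class KafkaToParquet {
        public static void main(String[] args) throws TimeoutException, StreamingQueryException {
            SparkSession spark = SparkSession.builder()
                    .appName("kafka-to-parquet")
                    .getOrCreate();

            // Subscribe to a Kafka topic; key/value arrive as binary and are cast to strings.
            Dataset<Row> events = spark.readStream()
                    .format("kafka")
                    .option("kafka.bootstrap.servers", "broker:9092") // placeholder broker
                    .option("subscribe", "events")                    // placeholder topic
                    .load()
                    .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)", "timestamp");

            // Write the stream as Parquet; the checkpoint directory gives restart safety.
            StreamingQuery query = events.writeStream()
                    .format("parquet")
                    .option("path", "/data/events")              // placeholder output path
                    .option("checkpointLocation", "/chk/events") // placeholder checkpoint dir
                    .start();

            query.awaitTermination();
        }
    }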

Preferred Skills:

  • Data Integration, SQL, NoSQL, Cloudera, Advanced Java
  • Teamwork abilities: The big data architect must be able to work in a team-oriented environment with people of diverse skill sets
  • Analytical skills: big data architects must be able to analyze complex problems using the information provided, understand customer requests, and provide appropriate solutions
  • Data warehousing

    o Strong data warehousing concepts
    o Strong data modeling skills (analytical models), preferably with ERwin
    o Must have implemented end-to-end DWH projects (discovery, requirements gathering, HLD/LLD documentation, development, testing, SIT, UAT, deployment, pre- and post-production checks, etc.)
    o Must have worked with Teradata / Greenplum / Synapse / BigQuery / Snowflake

  • Data Integration / Data Engineering

    o Strong logical and analytical abilities in DI/DQ
    o Very strong skills in writing logical, reusable data engineering code that avoids rework
    o Strong knowledge and technical expertise in ETL tools such as Talend, Informatica, DataStage, Pentaho, etc.

  • Data Quality

    o Strong understanding of and experience with DQ frameworks
    o Must have implemented at least 2 projects involving end-to-end DQ frameworks
    o Very strong skills in writing logical, reusable data quality code that avoids rework (see the illustrative sketch below)
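
To illustrate the reusable-DQ-code point above (again an assumption-laden sketch, not part of the posting), here are two generic, dataset-agnostic checks in Java on Spark; the DqChecks class and its method names are hypothetical:

    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Row;

    import static org.apache.spark.sql.functions.col;

    public final class DqChecks {
        private DqChecks() {}

        // Fraction of rows where the given column is null; reusable across any Dataset.
        public static double nullRate(Dataset<Row> df, String column) {
            long total = df.count();
            if (total == 0) {
                return 0.0;
            }
            long nulls = df.filter(col(column).isNull()).count();
            return (double) nulls / total;
        }

        // True when every value in the column is distinct, i.e. it can serve as a key.
        public static boolean isUniqueKey(Dataset<Row> df, String column) {
            return df.select(column).distinct().count() == df.count();
        }
    }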

Good to Have Skills:

  • Experience with EDW tools such as Teradata, Netezza, etc.

Qualifications and Education Requirements:

  • Any Degree
Job Category: Tech
Job Type: Full Time
Job Location: India

Apply for this position

Accepted file types: .pdf, .doc, .docx