We welcome you to join our innovative and inspiring team!

  • Big Data Architect


    The Big Data Architect primarily utilizes data platform architecture design and implementation methodologies to help our clients achieve their strategic vision.

    Our Mission:

    As one of the most critical members of our customer-facing team, you will be put directly in front of customers to drive engagements.

    We are looking for someone who gets excited about talking to customers!

    Here is how:

    The Big Data Architect owns core aspects of an engagement and may work in partnership with other domain experts to deliver complex and comprehensive enterprise solutions.

    • You should have a broad understanding of, and deep experience with, distributed data integration solutions, analytics applications, and data processing and storage platforms in highly scalable environments. You should be familiar with relational and NoSQL databases and have expert knowledge of the Hadoop ecosystem.
    • You should be skilled at synthesizing business drivers: understanding which factors influence a company’s strategy and how an organization’s available data can contribute to that strategy’s success.
    • You must be comfortable leading both business technology strategy conversations with senior management and deep-dive technical sessions with SMEs.
    • You should have the curiosity to dive into available data sources and enjoy searching for patterns that could indicate new insights.
    • You should be confident and independent when presenting recommendations.


    • Owning the technical engagement and ultimate success of specific implementation projects.
    • Developing deep expertise in private and public (e.g. AWS) Cloud technologies.
    • Being a subject matter expert and taking on a consultative role as it pertains to Cloud adoption.
    • Ensuring that our applications and infrastructure are designed and implemented to the highest security standards, maintaining and enhancing customer trust.
    • Evangelizing data security and advocating for keeping customer information secure, compliant, and audited.
    • Thinking strategically about business, product, and technical challenges.


    • Demonstrated knowledge of enterprise data architecture design methodologies
    • Deep expertise with at least one NoSQL distribution (e.g. Cassandra, MongoDB)
    • Deep expertise with the Apache Hadoop ecosystem and at least one commercial Hadoop distribution (e.g. Hortonworks, MapR)
    • Familiarity with relational and dimensional data modeling techniques and at least one leading RDBMS platform (e.g. Oracle, MySQL or MS SQL Server)
    • Coding skills in Java and/or Python, and familiarity with SDLC methodologies
    • Expertise with multiple commercial analytics and reporting platforms
    • Working knowledge of AWS and/or Azure, or experience building High-Availability or High Traffic infrastructure
    • 5-10 years of demonstrated implementation/consulting experience
    • Strong verbal and written communications skills are a must, as well as the ability to work effectively across internal and external organizations
    • Technical degree; Computer Science or Math background highly desired
    • Industry certifications
    • You will have access to several customer production sites and applications. Confidentiality and non-disclosure of client information are a non-negotiable requirement
    • Familiarity with disaster recovery processes and security breach reporting is desired
    • A definite plus, but not mandatory: familiarity and experience with compliance standards such as Sarbanes-Oxley, HIPAA, and PCI
    • Up to 50% travel may be required
  • Senior Lead Big Data Developer (Hadoop)

    Your Mission:

    The Lead Big Data Developer is responsible for driving the ingestion of Enterprise data from multiple internal and external heterogeneous sources to provide a semantically consistent view of Enterprise data (by promoting Metadata artifacts as first-class citizens).

    The Opportunity:

    You possess a strong background in and understanding of data, including database structure theories, principles, and practices. The Lead Big Data Developer mentors and coaches other developers on the big data team, and is engaged from project inception to production delivery, ensuring the solution is aligned with the Enterprise Data Strategy (EDS). The Lead Big Data Developer is a Hadoop developer and evangelist.


    • Ensures Big Data development adherence to principles and policies supporting the EDS.
    • Designs solutions based on high-level architecture.
    • Analyzes and designs data needs for the Data Lake to support Advanced Analytics.
    • Translates business requirements into working features.
    • Architects, designs, develops, unit tests, and documents technical solutions.
    • Mentors Big Data Developers on best practices and strategic development.
    • Provides solutions to current (tactical) problems with permanent (strategic) answers.
    • Carries out systematic problem identification, analysis, and resolution.
    • Develops and documents technical design specifications.
    • Leads technical meetings, as required.
    • Conveys ideas clearly and tailors communication based on selected audience (technical and non-technical).
    • Effectively composes technical and non-technical documents such as design specifications, operation guides, process flows, and other technical schematics.
    • Writes and updates task/project status and reports.


    • 2+ years working with and coding in Apache Hadoop, preferably with Hortonworks distribution.
    • Hadoop ecosystem experience (Hive, Flume, Sqoop, Hadoop API, HDFS storage formats).
    • Basic Hadoop Administration experience.
    • Experience with NoSQL (preferably Cassandra).
    • 2+ years Java experience.
    • 2+ years working with relational databases (Sybase, Oracle, and SQL Server) and ELT, with SQL programming to write Extractor Predicates.
    • Shell scripting.
    • Data Flow Modeling.
    • Knowledge of Linux is a must.
  • Senior Big Data/Hadoop Engineer


    • Design and build multi-tenant systems capable of loading and transforming large volumes of structured and semi-structured high-velocity data streams
    • Build robust and scalable data infrastructure (both batch processing and real-time) to support needs from internal and external users
    • Implement measures to address data privacy, security and compliance
    • Work with the product management, marketing, and security research teams to identify requirements and evolve the data architecture
    • Mentor other engineers in the team regarding technology and best practices.

    Desired Skills and Experience


    • 8+ years of hands-on experience working with software development and enterprise data warehouse solutions.
    • 3+ years of experience in building high performance data processing infrastructure taking into account concurrency, latency and efficiency by profiling, reviewing logs etc.
    • 2+ years working with query engines such as Presto, Hive or Spark SQL
    • Ability to identify appropriate data serialization techniques and data stores for persisting events
    • Ability to set up a Hadoop cluster from scratch and maintain, troubleshoot, and tune it by reviewing logs from the various Hadoop services.
    • Strong understanding of the Hadoop stack - HDFS, MapReduce, YARN/Mesos, ZooKeeper
    • Experience with data processing and streaming frameworks such as Spark, Kafka, Storm, and Elasticsearch
    • Experience architecting systems following the REST architectural style
    • Excellent interpersonal, technical and communication skills
    • Ability to learn, evaluate and adopt new technologies
    • Ability to prioritize multiple tasks in a fast-paced environment
    • Bachelor's Degree in computer science or equivalent experience.

    Highly Desirable:

    • Working with AWS - EC2, S3, EMR, Redshift, etc.
    • Working with Spark
    • Basic understanding of statistical analysis.
    • Practical experience with the Scala programming language
    • Application of machine learning to security log analytics
  • Data Engineer

    The engineering team consists of talented, team-oriented individuals who are empowered to take advantage of the latest cloud and distributed technologies to deliver reliable, high-throughput applications.

    As a Data Engineer, you’ll employ your skills on a daily basis to design and build data processing and storage applications to handle millions of transactions per day. You will analyze business requirements and consult with the broader team to ensure successful processing, storage and reporting of our Big Data. You’ll have a wide variety of languages and technologies at your disposal that you can use to solve problems. Your work will directly shape and create our data architecture to ultimately deliver systems that stand up to unpredictable environments at massive scale.

    Technical Skills Needed:

    • 5 years of experience working with the following technology stack: core languages are Java and C#; RESTful services, jQuery, SQL Server, Hadoop, Hive, HBase, Storm, Spark, and AWS services such as Kinesis, DynamoDB, Redshift, Lambda, and SQS.
    • A growing track record of success, or the groundwork to be an impactful member of the team. We’re looking for candidates who exhibit many of the following skills/attributes:
    • Strong Educational Background
    • Hands-on engineering experience in:
      - Problem solving and debugging
      - Writing and deploying code in Linux, Windows, or cloud environments
      - Familiarity with algorithms and performance analysis
      - Willingness to contribute to the operational responsibility of the team’s applications
    • Some experience with one or more of the following:
      - Relational databases & SQL; NoSQL databases (Cassandra, Redis, DynamoDB, MongoDB)
      - Big Data tools such as Hadoop, Hive, EMR, Storm, Spark, DynamoDB, HBase
Copyright © 2017 Evaya Data Systems. All rights reserved.