Senior Data Platform Architect

Location: Columbus, OH

Our client strongly believes in work/life balance. They prioritize impactful work and want their teams to feel challenged and fulfilled. They strive to make a job at the company a meaningful part of a balanced life, rather than a substitute for one.

They create technology that makes information more accessible and useful to people worldwide. Through shared technology services, original research, and community programs, they meet the ever-evolving needs of their users, institutions, and communities. With office locations around the world, their employees are dedicated to providing premier services and software to further this mission.

The Job Details:

Data Services is responsible for foundational platforms for the company's key data assets. Are you passionate about solving global-scale problems, handling petabytes of data, billions of records, and thousands of updates per second? Our client is seeking a senior technology leader to join their team. You will collaborate across the organization to understand business opportunities, current solutions, and future technology opportunities for their core data platform.

The ideal candidate will possess the following skills, experiences, and attitude:
You have a drive to understand new technology, architecture, and languages. Your solutions are based on first-hand knowledge derived from proof of concepts, rapid prototyping, and active collaboration. You know that the best technology solutions solve customer problems, and understanding their needs is crucial for success as a technologist. You have extensive experience with the Hadoop ecosystem, including HBase, HDFS, Spark, Kafka, and other related technologies. You have provided architecture guidance for both on-premises and cloud solutions. Lastly, you have experience developing within the Java/Spring ecosystem, particularly for Search and ETL workflows.

The Senior Data Platform Architect will review the company's big data platform and adjacent application stacks to develop a strategic analysis and future roadmap. They will perform high-level analysis and design of software and systems, including creating, analyzing, designing, modifying, and testing system components. They are expected to communicate strategic level vision to upper management and provide technical direction to development teams, thereby establishing a shared understanding and roadmap for the platform's future.

Responsibilities:

  • Evaluate the existing Hadoop ecosystem, including architecture, infrastructure, and performance, to identify migration challenges and areas for improvement.
  • Develop comprehensive strategies for migration or risk mitigation, including a detailed roadmap, timeline, and resource requirements. Identify suitable technology alternatives.
  • Provide leadership within the organization in their specific area of expertise.
  • Assume the technical lead and act as the architect for major company systems or concepts. Participate in major architectural reviews and plans.
  • Effectively communicate information and trends within their area of expertise to the appropriate business units within the company.
  • Prototype systems in anticipation of new requirements.
  • Develop functional requirements based on prototype systems.
  • Analyze and solve problems in existing systems.
  • Design, code, and test multiple modules of a system in a timely manner.
  • Ensure project teams plan and participate in load, capacity, and performance analysis and/or testing.
  • Represent the company's position by participating in or leading relevant standards committees, such as ISO, NISO, ACM, and IEEE. Lead the implementation of standards within company systems.
  • Perform other assigned tasks.

Qualifications:

  • Master's degree required, with 8 to 12 years of experience in a highly technical role. (Without a Master's degree, 10 to 14 years of equivalent experience in a highly technical role.)
  • Familiarity with alternative big data technologies, such as Apache Kafka, Apache Flink, Apache Cassandra, or cloud-based solutions like Amazon EMR or Google BigQuery.
  • 5+ years of leadership experience with Hadoop and related technologies, such as Hive, Pig, and Spark.
  • Proficiency in programming languages commonly used in big data processing, such as Java, Scala, or Python.
  • Experience transitioning Hadoop workflows to alternative solutions is strongly preferred.
  • Strong understanding of distributed computing principles, large-scale data processing frameworks, and general big data concepts.
  • Excellent communication and presentation skills.
  • Ability to work independently and as part of a team.
  • Proven experience leading large-scale projects across multiple teams for several years.
  • Experience with multiple complex, business-critical systems is expected.
  • Experience with on-premises big data architectures is preferred.
  • Experience with Graph data architectures and Graph databases is preferred.
