Remote bigdata Jobs

This Month

Senior Cloud Data Architect/Engineer
azure bigdata kubernetes postgresql apache-spark cloud Apr 04

Introduction to Shield AI: Shield AI's mission is to protect service members and civilians with artificially intelligent systems. For our world-class team, no idea is too ambitious, and we never stop working to make possible what looks out of reach today. We are backed by Silicon Valley venture capital firms including Andreessen Horowitz, have been shipping product since 2018, and are growing rapidly.

Job Description: Are you a passionate and innovative Senior Cloud Data Architect/Engineer with real-world experience architecting big data pipelines? Are you eager to make a positive difference in the world? Do you want to work alongside mission-driven and values-focused teammates? Shield AI is just the place for you!

As a Senior Cloud Data Architect/Engineer on the Fleet team in the Nova Systems Business Unit, you'll have the opportunity to work on data infrastructure at Shield AI and play a critical role in the success of our company!

What you'll do:

  • You will be responsible for driving the architecture and creation of a scalable cloud data pipeline platform
  • You will design and build scalable infrastructure platforms to collect and process large amounts of structured and unstructured data that will be consumed in real-time
  • You will work on automating data pipelines, creating data models, and monitoring and ensuring performance
  • You will conduct root-cause analyses of system instability and performance issues, with respect to both accuracy and speed
  • You will be responsible for making decisions regarding data storage, technology selection, organization, and solution design in conjunction with software engineering and product management teams
  • You will be tasked with identifying and executing on best practices
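To make the pipeline responsibilities above concrete, here is a minimal sketch of one batch pipeline stage in plain Python: validate raw records, normalize them into a common shape, and aggregate per key. All names (`clean`, `aggregate_by_key`, the sensor schema) are illustrative assumptions, not Shield AI's actual stack; a production pipeline would use a framework such as Spark.

```python
# Hypothetical sketch of one stage of a batch data pipeline: validate raw
# records, normalize types, and roll up values per key. Illustrative only.
from collections import defaultdict

def clean(records):
    """Drop records missing required fields and normalize types."""
    for r in records:
        if "sensor_id" in r and "value" in r:
            yield {"sensor_id": str(r["sensor_id"]), "value": float(r["value"])}

def aggregate_by_key(records):
    """Sum values per sensor -- the kind of rollup a pipeline job might emit."""
    totals = defaultdict(float)
    for r in records:
        totals[r["sensor_id"]] += r["value"]
    return dict(totals)

raw = [
    {"sensor_id": 1, "value": "2.5"},
    {"sensor_id": 1, "value": 1.5},
    {"sensor_id": 2, "value": 3.0},
    {"value": 9.9},  # malformed: no sensor_id, dropped by clean()
]
print(aggregate_by_key(clean(raw)))  # {'1': 4.0, '2': 3.0}
```

The same validate/transform/aggregate shape scales up directly to Spark or Kafka Streams jobs over structured and unstructured data.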

Projects that you might work on: 

  • Collection and management of data for training and evaluating models, and scaled data analysis for Hivemind

People we're looking for have the following required education and experience:

  • 5+ years of demonstrated technical expertise with the following technologies:
    • Cloud platforms (Azure, GCP, or AWS)
    • Data processing frameworks (Spark/MapReduce, Kafka, etc.)
    • Distributed data stores (Hadoop, BigQuery/BigTable, Redshift, S3, etc.)
    • Expert programming skills (Python, Go, Kotlin, etc.)
    • Containerization technologies (Docker, Kubernetes)
    • Relational and NoSQL databases
  • You have real-world experience architecting big data pipelines.
  • You have demonstrated knowledge of cloud computing technologies and current computing trends.
  • You have hands-on, professional experience designing and implementing large-scale data pipelines.

Competencies:

  • You have a demonstrated record of working hard, being a trustworthy teammate, holding yourself and others to high standards, and being kind to others


Closing:
If you're interested in being part of an engineering team that works hard, loves to have fun, and is working on some truly meaningful, challenging work, apply now and we can chat further!

Shield AI is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, marital status, disability, gender identity, or veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. If you have a disability or special need that requires accommodation, please let us know.

To conform to U.S. Government regulations, applicants must be U.S. citizens, lawful permanent residents of the U.S., protected individuals as defined by 8 U.S.C. 1324b(a)(3), or eligible to obtain the required authorizations from the U.S. Department of State.


This Year

REMOTE Java, Python, or Golang Backend Engineer - Big Data and Data at Very Large Scale
python java bigdata cassandra go golang Mar 02

NEEDED FOR THIS ROLE

  • Java, Python, or Golang (intermediate+ to expert preferred) in one or more
  • NoSQL databases (Cassandra, etc.) or time-series/unstructured database experience
  • Big data and data at very large scale
  • Experienced, battle-hardened software engineer (large-scale distributed systems)

This is NOT an SRE role!

This is a software engineering role on a team that provides ALL monitoring and will be responsible for developing a custom stack for data integration and retrieval. The team monitors time-series data ingest of upwards of 1.5M records a minute.
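At an ingest rate of 1.5M records a minute, a common building block is bucketing timestamped records into fixed windows before correlating them. A minimal sketch in Python, with illustrative names that are assumptions rather than the team's actual stack:

```python
# Illustrative sketch: bucket Unix-timestamped records into one-minute
# windows and count records per window -- the kind of rollup a monitoring
# stack performs before correlating metrics. Not the team's actual code.
from collections import Counter

WINDOW_SECONDS = 60

def minute_bucket(ts: float) -> int:
    """Map a Unix timestamp to the start of its one-minute window."""
    return int(ts // WINDOW_SECONDS) * WINDOW_SECONDS

def counts_per_minute(timestamps):
    """Count records per one-minute window."""
    return Counter(minute_bucket(ts) for ts in timestamps)

ingest = [0.5, 10.0, 59.9, 60.1, 125.0]
print(counts_per_minute(ingest))  # Counter({0: 3, 60: 1, 120: 1})
```

In practice the same windowing logic would run inside a distributed store or stream processor (e.g. Cassandra time buckets) rather than in-memory, but the bucketing arithmetic is identical.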

MUST HAVE

  • Ability to develop code to access resident data and then digest and correlate it.
  • Experienced, battle-hardened software engineer with experience deploying and implementing large-scale distributed systems.
  • Solid programmer: knows one or more of Java, Python, or Golang, and is expert in at least one.

They are NOT looking for a script writer.

The ideal candidate has experience with a time-series data store (e.g., Cassandra).

  • Expertise in NoSQL databases at giga scale

The SRE Monitoring Infrastructure team (note: this is NOT an SRE role) is looking for a backend software engineer with experience working with large-scale systems and an operational mindset to help scale our operational metrics platform. This is a fantastic opportunity to enable all engineers to monitor and keep our site up and running. In return, you will get to work with a world-class team supporting a platform that serves billions of metrics at millions of QPS.

The engineers fill the mission-critical role of ensuring that our complex, web-scale systems are healthy, monitored, automated, and designed to scale. You will use your background as an operations generalist to work closely with our development teams from the early stages of design all the way through identifying and resolving production issues. The ideal candidate will be passionate about an operations role that involves deep knowledge of both the application and the product, and will also believe that automation is a key component to operating large-scale systems.

Responsibilities:
• Serve as a primary point of contact responsible for the overall health, performance, and capacity of one or more of our Internet-facing services
• Gain deep knowledge of our complex applications.
• Assist in the roll-out and deployment of new product features and installations to facilitate our rapid iteration and constant growth.
• Develop tools to improve our ability to rapidly deploy and effectively monitor custom applications in a large-scale UNIX environment.
• Work closely with development teams to ensure that platforms are designed with "operability" in mind.
• Function well in a fast-paced, rapidly-changing environment.
• Participate in a 24x7 rotation for second-tier escalations.
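The tooling responsibility above often starts with something as simple as a threshold check over recent metrics. A hypothetical sketch (class name, window size, and the error-rate threshold are all illustrative assumptions, not the team's tooling):

```python
# Hypothetical monitoring check: alert when the error rate over a sliding
# window of recent samples exceeds a threshold. Illustrative only.
from collections import deque

class ErrorRateCheck:
    def __init__(self, window: int = 100, threshold: float = 0.05):
        self.samples = deque(maxlen=window)  # recent success/failure flags
        self.threshold = threshold

    def record(self, ok: bool) -> None:
        self.samples.append(ok)

    def alerting(self) -> bool:
        """True when the observed error rate exceeds the threshold."""
        if not self.samples:
            return False
        errors = sum(1 for ok in self.samples if not ok)
        return errors / len(self.samples) > self.threshold

check = ErrorRateCheck(window=10, threshold=0.2)
for ok in [True] * 8 + [False] * 2:
    check.record(ok)
print(check.alerting())  # False: 2/10 = 0.2 is not above the 0.2 threshold

check.record(False)  # window slides: oldest success drops, 3 failures remain
print(check.alerting())  # True: 3/10 = 0.3 > 0.2
```

A real deployment would feed such checks from the metrics platform and route alerts into the 24x7 escalation rotation, but the core decision logic is this small.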

Basic Qualifications:
• B.S. or higher in Computer Science or other technical discipline, or related practical experience.
• UNIX/Linux systems administration background.
• Programming skills (Golang, Python)

Preferred Qualifications:
• 5+ years in a UNIX-based large-scale web operations role.
• Golang and/or Python experience
• Previous experience working with geographically-distributed coworkers.
• Strong interpersonal communication skills (including listening, speaking, and writing) and ability to work well in a diverse, team-focused environment with other SREs, Engineers, Product Managers, etc.
• Basic knowledge of most of these: data structures, relational and non-relational databases, networking, Linux internals, filesystems, web architecture, and related topics

Team

  • Interact with 4-5 people (stand-ups), but not true scrum
  • No interaction with outside teams

Candidate workflow

  • 2 rounds
  • 1 technical coding
  • 1 team fit
Senior Software Engineer/Developer
TopDevz  
hadoop bigdata powerbi senior Jul 04 2020

We are looking for an experienced senior Software Engineer/Developer who is excited to work on one of our many client projects, both greenfield (new) projects and legacy (support) projects in that technology stack. This is a remote position.

Skills & Requirements

The following skills are required:

Very experienced (5+ years) in software/app development.
Experienced in Power BI.
Experienced in Hadoop.
Good analytical skills; innovative and detail-oriented.
Good written and verbal communication skills.
Good problem-solving skills.
Significant attention to detail when writing code, including good commenting and code-documentation skills.
