Remote big-data Jobs

Last Week

Enterprise Account Executive - Financial Services
executive c saas big data Feb 25
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Sequoia Capital, Benchmark, and Index Ventures to recently invest a combined $125 million in our Series D financing. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!
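To make the event-streaming idea concrete: in Kafka, every change is published as an event to a topic and consumed as a continually updating stream. Below is a minimal sketch using the confluent-kafka Python client; the broker address, topic name, and event fields are illustrative assumptions, not part of this posting.

```python
# Sketch: publish a pageview event to a Kafka topic (confluent-kafka client).
# Broker address, topic name, and event schema are assumed for illustration.
import json
import time

from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})  # assumed local broker

def on_delivery(err, msg):
    # Called once per message to confirm (or report) delivery.
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()}[{msg.partition()}] @ {msg.offset()}")

event = {"user_id": 42, "page": "/pricing", "ts": time.time()}
producer.produce("pageviews", value=json.dumps(event).encode("utf-8"),
                 callback=on_delivery)
producer.flush()  # block until the broker acknowledges the event
```

Each such event is appended to a durable log, so downstream systems read the same data as a stream rather than as periodic snapshots.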

Enterprise Account Executives play a key role in driving Confluent’s sales activities in region. This role includes developing and executing the go-to-market strategy for your territory. The ideal candidate has experience selling complex Database, Messaging, Big Data, Open Source and/or SaaS solutions into large corporate and multinational companies.

What you will do:

  • Build awareness for Kafka and the Confluent Platform within large enterprises
  • Aggressively prospect, identify, qualify and develop sales pipeline
  • Close business to exceed monthly, quarterly and annual bookings objectives
  • Build strong and effective relationships, resulting in growth opportunities
  • Build and maintain relationships with new and existing Confluent partners

What we are looking for:

  • An ability to articulate and sell the business value of big data and the impact on businesses of all sizes
  • Deep experience selling within the Database, Open Source, Messaging or Big Data space
  • 5+ years experience selling enterprise technology in a fast-paced and competitive market
  • Experience selling to developers and C-level executives
  • Highly motivated, overachiever, team player
  • Strong analytical and writing abilities
  • Exceptional presentation skills
  • Entrepreneurial spirit/mindset, flexibility toward dynamic change
  • Goal oriented, with a track record of overachievement (President’s Club, Rep of the Year, etc.)

Why you will enjoy working here:

  • We’re solving hard problems that are relevant in every industry
  • Your growth is important to us, we want you to thrive here
  • You will be challenged on a daily basis
  • We’re a company that truly values a #oneteam mindset
  • We have great benefits to support you AND your family
Culture is a huge part of Confluent. We’re searching for the best people who not only excel at their role, but also contribute to the health, happiness and growth of the company. Inclusivity and openness are important traits, and we hold regular company-wide and team events. Here are some of the personal qualities we’re looking for:

Smart, humble and empathetic
Hard working, you get things done
Hungry to learn in a field which is ever evolving
Adaptable to the myriad of challenges each day can present
Inquisitive and not afraid to ask all the questions, no matter how basic
Ready to roll up your sleeves and help others, getting involved in projects where you feel you can add value
Strive for excellence in your work, your team and the company 

Come and build with us. We are one of the fastest-growing software companies in the market, built on the tenets of transparency, direct communication and inclusivity. Come meet the streams dream team and have a direct impact on how we shape Confluent.


Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.
Project Management Curriculum Writer
project-management agile kanban data science big data cloud Feb 22

Project Management Curriculum Writer

  • Education
  • Remote
  • Contract

Who We Are

Thinkful is a new type of school that brings high-growth tech careers to ambitious people everywhere. We provide 1-on-1 learning through our network of industry experts, hiring partners, and online platform to deliver a structured and flexible education. Thinkful offers programs in web development, data science, and design, with in-person communities in up-and-coming tech hubs around the U.S. To join the Thinkful network visit thinkful.com.

Job Description

Thinkful is launching a new Technical Project Management program which aims to be the best-in-class remote, part-time Technical Project Management program offered today. As part of this effort, we're looking for a Technical Project Management subject matter expert to join us in executing on our content roadmap for this exciting new program. You will be creating the backbone of a new program that propels people from a background in academia and the sciences into an impactful career as a Technical Project Manager. You'll produce written content, lesson plans (including instructor notes and student activity descriptions), presentation decks, assessments, and learning objectives, all to support our students as they learn the core skills of technical project management. Your work product will be extremely impactful, as it forms the core asset around which the daily experience of our students will revolve.

Responsibilities

  • Consistently deliver content that meets spec and is on time to support our program launch roadmap.
  • Create daily lesson plans consisting of:
    • Presentation decks that instructors use to lecture students on a given learning objective.
    • Instructor notes that instructors use alongside the presentation decks.
    • Activity descriptions — these are notes describing tasks students complete together in order to advance the learning objective in a given lecture.
  • Create curriculum checkpoint content on specific learning objectives. In addition to the in-class experience, our students also spend time reading and completing tasks for a written curriculum hosted on the Thinkful platform.
  • Create code assets where necessary to support lesson plans, student activities, and written curriculum content.
  • Iterate on deliverables based on user feedback.

Requirements

  • 3+ years of hands-on Technical Project Management industry experience 
  • Demonstrated subject matter expert in Technical Project Management 
  • Managing projects using Agile, Kanban and Six Sigma methodologies
  • Work on multiple projects, all complexity levels, in an environment with changing priorities
  • Change management expertise 
  • Web application development experience 
  • Running large-scale big data projects and/or AWS cloud-based projects
  • Collaborative. You enjoy partnering with people and have excellent project management skills and follow-through
  • Excellent writing skills. You've got a gift for writing about complicated concepts in a beginner-friendly way. You can produce high-quality prose as well as high-quality presentations.

Compensation and Benefits

  • Contract position with a collaborative team
  • Ability to work remotely with flexible hours 
  • Access to all available course curriculum for personal use
  • Membership to a global community of over 500 Software Engineers, Developers, and Data Scientists who, like you, want to keep their skills sharp and help learners break into the industry
Big Data ETL, Architecture
amazon-redshift amazon-redshift-spectrum postgis amazon-s3 data-structures big data Feb 20

Lean Media is looking for experts to help us with the ongoing import, enrichment (including geospatial), and architecture of big datasets (millions to billions of records at a time).

Our infrastructure and tech stack includes:

  • Amazon Redshift, Spectrum, Athena
  • AWS Lambda
  • AWS S3 Data Lakes
  • PostgreSQL, PostGIS
  • Apache Superset

We are looking for expertise in:

  • Building efficient ETL pipelines, including enrichments
  • Best practices regarding the ongoing ingestion of big datasets from disparate sources
  • High-performance enrichment of geospatial data (a sketch follows this list)
  • Optimizing data structures as they relate to achieving performant queries via analytics tools
  • Architecting a sustainable data infrastructure supporting all of the above
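As a rough sketch of the geospatial enrichment noted above: a PostGIS spatial join can tag each incoming record with the region that contains it. The connection string, table, and column names here are hypothetical.

```python
# Sketch: enrich raw event points with the containing region via PostGIS.
# Connection string, table, and column names are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=analytics user=etl")  # assumed connection
with conn, conn.cursor() as cur:
    # Write enriched rows so downstream analytics can filter by region
    # without repeating the spatial join on every query.
    cur.execute("""
        INSERT INTO enriched_events (event_id, region_name, geom)
        SELECT e.event_id, r.name, e.geom
        FROM raw_events e
        JOIN regions r ON ST_Contains(r.boundary, e.geom)
    """)
conn.close()
```

At the scale mentioned (millions to billions of records), such joins are typically batched and backed by spatial indexes (GiST) rather than run row by row.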

While this posting is for a contract position, we are open to short projects, ongoing engagements, and even full time employment opportunities. If you have a high degree of skill and experience in the area of big data architecture in AWS, then please let us know!


This Month

Senior Data Engineer
apache machine-learning algorithm senior python scala Feb 19

SemanticBits is looking for a talented Senior Data Engineer who is eager to apply computer science, software engineering, databases, and distributed/parallel processing frameworks to prepare big data for the use of data analysts and data scientists. You will mentor junior engineers and deliver data acquisition, transformations, cleansing, conversion, compression, and loading of data into data and analytics models. You will work in partnership with data scientists and analysts to understand use cases, data needs, and outcome objectives. You are a practitioner of advanced data modeling and optimization of data and analytics solutions at scale; an expert in data management, data access (big data, data marts, etc.), programming, and data modeling; and familiar with analytic algorithms and applications (like machine learning).

Requirements

  • Bachelor’s degree in computer science (or related) and eight years of professional experience
  • Strong knowledge of computer science fundamentals: object-oriented design and programming, data structures, algorithms, databases (SQL and relational design), networking
  • Demonstrable experience engineering scalable data processing pipelines.
  • Demonstrable expertise with Python, Spark, and wrangling of various data formats: Parquet, CSV, XML, JSON (see the sketch after this list).
  • Experience with the following technologies is highly desirable: Redshift (w/Spectrum), Hadoop, Apache NiFi, Airflow, Apache Kafka, Apache Superset, Flask, Node.js, Express, AWS EMR, Scala, Tableau, Looker, Dremio
  • Experience with Agile methodology, using test-driven development.
  • Excellent command of written and spoken English
  • Self-driven problem solver
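To illustrate the format wrangling named in the requirements, here is a minimal pandas sketch that cleans a CSV and lands compressed Parquet; file names and columns are hypothetical, and to_parquet assumes pyarrow (or fastparquet) is installed.

```python
# Sketch: clean a raw CSV and write snappy-compressed Parquet with pandas.
# File names and column names are hypothetical.
import pandas as pd

df = pd.read_csv("claims_raw.csv", dtype={"member_id": "string"})
df = df.dropna(subset=["member_id"])                     # drop unusable rows
df["service_date"] = pd.to_datetime(df["service_date"])  # normalize dates
df.to_parquet("claims_clean.parquet", compression="snappy")
```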
Cloud Architect for Enterprise AI - Remote
Dataiku  
cloud data science big data linux aws azure Feb 18
Dataiku’s mission is big: to enable all people throughout companies around the world to use data by removing friction surrounding data access, cleaning, modeling, deployment, and more. But it’s not just about technology and processes; at Dataiku, we also believe that people (including our people!) are a critical piece of the equation.



Dataiku is looking for an experienced Cloud Architect to join its Field Engineering Team to support the deployment of its Enterprise AI Platform (Dataiku DSS) to an ever-growing customer base.

As a Cloud Architect, you’ll work with customers at every stage of their relationship with Dataiku - from the initial evaluations to enterprise-wide deployments. In this role, you will help customers to design, build and run their Data Science and AI Enterprise Platforms.

This role requires adaptability, inventiveness, and strong communication skills. Sometimes you will work with clients on traditional big data technologies such as SQL data warehouses and on-premise Hadoop data lakes, while at other times you will be helping them to discover and implement the most cutting-edge tools: Spark on Kubernetes, cloud-based elastic compute engines, and GPUs. If you are interested in staying at the bleeding edge of big data and AI while maintaining a strong working knowledge of existing enterprise systems, this will be a great fit for you.

The position can be based remotely.

Responsibilities

  • Evangelize the challenges of building Enterprise Data Science Platforms to technical and non-technical audiences
  • Understand customer requirements in terms of scalability, availability and security and provide architecture recommendations
  • Deploy Dataiku DSS in a large variety of technical environments (on-prem/cloud, Hadoop, Kubernetes, Spark, …)
  • Design and build reference architectures, how-tos, scripts and various helpers to make the deployment and maintenance of Dataiku DSS smooth and easy
  • Automate operation, installation, and monitoring of the data science ecosystem components in our infrastructure stack
  • Provide advanced support for strategic customers on deployment and scalability issues
  • Coordinate with Revenue and Customer teams to deliver a consistent experience to our customers
  • Train our clients and partners in the art and science of administering a bleeding-edge Elastic AI platform

Requirements

  • Strong Linux system administration experience
  • Grit when faced with technical issues. You don’t rest until you understand why it does not work.
  • Comfort and confidence in client-facing interactions
  • Ability to work both pre and post sale
  • Experience with cloud based services like AWS, Azure and GCP
  • Hands-on experience with the Hadoop and/or Spark ecosystem for setup, administration, troubleshooting and tuning
  • Hands-on experience with the Kubernetes ecosystem for setup, administration, troubleshooting and tuning
  • Some experience with Python
  • Familiarity with Ansible or other application deployment tools

Bonus points for any of these

  • Experience with authentication and authorization systems like LDAP, Kerberos, AD, and IAM
  • Experience debugging networking issues such as DNS resolutions, proxy settings, and security groups
  • Some knowledge in data science and/or machine learning
  • Some knowledge of Java

Benefits

  • Work on the newest, best, big data technologies for a unicorn startup
  • Consult on AI infrastructure for some of the largest companies in the world
  • Equity
  • Opportunity for international exchange to another Dataiku office
  • Attend and present at big data conferences
  • Startup atmosphere: free food and drinks, an international environment, general good times and friendly people


To fulfill its mission, Dataiku is growing fast! In 2019, we achieved unicorn status, grew from 200 to 400 people and opened new offices across the globe. We now serve our global customer base from our headquarters in New York City as well as offices in Paris, London, Munich, Amsterdam, Denver, Los Angeles, Singapore, Sydney and Dubai. Each office has a unique culture, but beneath the local nuances we always value curiosity, collaboration, and can-do attitudes!
Site Reliability Engineer
hadoop linux bigdata python ruby c Feb 14

The Wikimedia Foundation is hiring two Site Reliability Engineers to support and maintain (1) the data and statistics infrastructure that powers a big part of decision making in the Foundation and in the Wiki community, and (2) the search infrastructure that underpins all search on Wikipedia and its sister projects. This includes everything from eliminating boring things from your daily workflow by automating them, to upgrading a multi-petabyte Hadoop or multi-terabyte Search cluster to the next upstream version without impacting uptime and users.

We're looking for an experienced candidate who's excited about working with big data systems. Ideally you will already have some experience working with software like Hadoop, Kafka, Elasticsearch, Spark and other members of the distributed computing world. Since you'll be joining an existing team of SREs you'll have plenty of space and opportunities to get familiar with our tech (Analytics, Search, WDQS), so there's no need to immediately have the answer to every question.

We are a full-time distributed team with no one working out of the actual Wikimedia office, so we are all together in the same remote boat. Part of the team is in Europe and part in the United States. We see each other in person two or three times a year, either during one of our off-sites (most recently in Europe), the Wikimedia All Hands (once a year), or Wikimania, the annual international conference for the Wiki community.

Here are some examples of projects we've been tackling lately that you might be involved with:

  •  Integrating an open-source GPU software platform like AMD ROCm in Hadoop and in the Tensorflow-related ecosystem
  •  Improving the security of our data by adding Kerberos authentication to the analytics Hadoop cluster and its satellite systems
  •  Scaling the Wikidata query service, a semantic query endpoint for graph databases
  •  Building the Foundation's new event data platform infrastructure
  •  Implementing alarms that alert the team of possible data loss or data corruption
  •  Building a new and improved Jupyter notebooks ecosystem for the Foundation and the community to use
  •  Building and deploying services in Kubernetes with Helm
  •  Upgrading the cluster to Hadoop 3
  •  Replacing Oozie with Airflow as a workflow scheduler (a minimal DAG sketch follows this list)
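For a flavor of the Oozie-to-Airflow migration mentioned in the last item, here is a minimal Airflow DAG; the DAG id and task commands are placeholders, not Wikimedia's actual pipelines.

```python
# Sketch: a two-task Airflow DAG standing in for an Oozie workflow.
# DAG id and bash commands are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash_operator import BashOperator

with DAG(
    dag_id="daily_pageview_rollup",   # hypothetical pipeline name
    start_date=datetime(2020, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(task_id="extract",
                           bash_command="hdfs dfs -test -e /data/raw")  # placeholder
    aggregate = BashOperator(task_id="aggregate",
                             bash_command="spark-submit rollup.py")     # placeholder

    extract >> aggregate  # aggregate runs only after extract succeeds
```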

And these are our more formal requirements:

  •    A couple of years' experience in an SRE/Operations/DevOps role as part of a team
  •    Experience in supporting complex web applications running highly available and high traffic infrastructure based on Linux
  •    Comfortable with configuration management and orchestration tools (Puppet, Ansible, Chef, SaltStack, etc.), and modern observability infrastructure (monitoring, metrics and logging)
  •    An appetite for the automation and streamlining of tasks
  •    Willingness to work with JVM-based systems  
  •    Comfortable with shell and scripting languages used in an SRE/Operations engineering context (e.g. Python, Go, Bash, Ruby, etc.)
  •    Good understanding of Linux/Unix fundamentals and debugging skills
  •    Strong English language skills and ability to work independently, as an effective part of a globally distributed team
  •    B.S. or M.S. in Computer Science, related field or equivalent in related work experience. Do not feel you need a degree to apply; we value hands-on experience most of all.

The Wikimedia Foundation is... 

...the nonprofit organization that hosts and operates Wikipedia and the other Wikimedia free knowledge projects. Our vision is a world in which every single human can freely share in the sum of all knowledge. We believe that everyone has the potential to contribute something to our shared knowledge, and that everyone should be able to access that knowledge, free of interference. We host the Wikimedia projects, build software experiences for reading, contributing, and sharing Wikimedia content, support the volunteer communities and partners who make Wikimedia possible, and advocate for policies that enable Wikimedia and free knowledge to thrive. The Wikimedia Foundation is a charitable, not-for-profit organization that relies on donations. We receive financial support from millions of individuals around the world, with an average donation of about $15. We also receive donations through institutional grants and gifts. The Wikimedia Foundation is a United States 501(c)(3) tax-exempt organization with offices in San Francisco, California, USA.

The Wikimedia Foundation is an equal opportunity employer, and we encourage people with a diverse range of backgrounds to apply.

U.S. Benefits & Perks*

  • Fully paid medical, dental and vision coverage for employees and their eligible families (yes, fully paid premiums!)
  • The Wellness Program provides reimbursement for mind, body and soul activities such as fitness memberships, baby sitting, continuing education and much more
  • The 401(k) retirement plan offers matched contributions at 4% of annual salary
  • Flexible and generous time off - vacation, sick and volunteer days, plus 19 paid holidays - including the last week of the year.
  • Family friendly! 100% paid new parent leave for seven weeks plus an additional five weeks for pregnancy, flexible options to phase back in after leave, fully equipped lactation room.
  • For those emergency moments - long and short term disability, life insurance (2x salary) and an employee assistance program
  • Pre-tax savings plans for health care, child care, elder care, public transportation and parking expenses
  • Telecommuting and flexible work schedules available
  • Appropriate fuel for thinking and coding (aka, a pantry full of treats) and monthly massages to help staff relax
  • Great colleagues - diverse staff and contractors speaking dozens of languages from around the world, fantastic intellectual discourse, mission-driven and intensely passionate people

*Eligible international workers' benefits are specific to their location and dependent on their employer of record

Data Engineer
NAVIS  
hadoop web-services python sql etl machine learning Feb 11

NAVIS is excited to be hiring a Data Engineer for a remote, US-based position. Candidates based outside of the US are not being considered at this time. This is a NEW position due to growth in this area.

Be a critical element of what sets NAVIS apart from everyone else!  Join the power behind the best-in-class Hospitality CRM software and services that unifies hotel reservations and marketing teams around their guest data to drive more bookings and revenue.

Our Guest Experience Platform team is seeking an experienced Data Engineer to play a lead role in the building and running of our modern big data and machine learning platform that powers our products and services. In this role, you will be responsible for building the analytical data pipeline, data lake, and real-time data streaming services. You should be passionate about technology and complex big data business challenges.

You can have a huge impact on everything from the functionality we deliver for our clients, to the architecture of our systems, to the technologies that we are adopting. 

You should be highly curious with a passion for building things!



DUTIES & RESPONSIBILITIES:

  • Design and develop business-critical data pipelines and related back-end services
  • Identify and help address scalability issues for enterprise-level data pipelines
  • Design and build big data infrastructure to support our data lake (a minimal pipeline sketch follows this list)
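As a minimal illustration of the pipeline work above, the PySpark skeleton below reads raw JSON events, reshapes them, and lands partitioned Parquet in a data lake; the bucket paths and field names are hypothetical.

```python
# Sketch: raw JSON events -> cleaned, date-partitioned Parquet in a lake.
# S3 paths and field names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("guest-events-etl").getOrCreate()

events = spark.read.json("s3://example-raw/guest-events/")    # assumed source
cleaned = (events
           .filter(F.col("guest_id").isNotNull())             # drop orphan rows
           .withColumn("event_date", F.to_date("event_ts")))  # daily partitions

(cleaned.write
        .mode("append")
        .partitionBy("event_date")
        .parquet("s3://example-lake/guest-events/"))          # assumed target
```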

QUALIFICATIONS:

  • 2+ years of extensive experience with Hadoop (or similar) Ecosystem (MapReduce, Yarn, HDFS, Hive, Spark, Presto, HBase, Parquet)
  • Experience with building, breaking, and fixing production data pipelines
  • Hands-on SQL skills and background in other data stores like SQL-Server, Postgres, and MongoDB
  • Experience with continuous delivery and automated deployments (Terraform)
  • ETL experience
  • Able to identify and help address scalability issues for enterprise-level data
  • Python programming experience

DESIRED, BUT NOT REQUIRED SKILLS:

  • Experience with machine learning libraries like scikit-learn, Tensorflow, etc., or an interest in picking it up
  • Experience with R to mine structured and unstructured data and/or building statistical models
  • Experience with Elasticsearch
  • Experience with AWS services like Glue, S3, SQS, Lambda, Fargate, EC2, Athena, Kinesis, Step Functions, DynamoDB, CloudFormation and CloudWatch will be a huge plus

POSITION LOCATION:

There are 3 options for the location of this position (candidates based outside the US are NOT being considered at this time):

  • You can work remotely in the continental US with occasional travel to Bend, Oregon
  • You can be based at a shared office space in the heart of downtown Portland, Oregon
  • You can be based at our offices in Bend, Oregon (relocation assistance package available)



NAVIS OFFERS:

  • An inclusive, fun, values-driven company culture – we’ve won awards for it
  • A growing tech company in Bend, Oregon
  • Work / Life balance - what a concept!
  • Excellent benefits package with a Medical Expense Reimbursement Program that helps keep our medical deductibles LOW for our Team Members
  • 401(k) with generous matching component
  • Generous time off plus a VTO day to use working at your favorite charity
  • Competitive pay + annual bonus program
  • FREE TURKEYS (or pies) for every Team Member for Thanksgiving (hey, it's a tradition around here)
  • Your work makes a difference here, and we make a huge impact to our clients’ profits
  • Transparency – regular All-Team meetings, so you can stay in-the-know with what’s going on in all areas of our business
VP, Data Science & Engineering
machine-learning hadoop data science c machine learning big data Feb 10

The Wikimedia Foundation is seeking an experienced executive to serve as Vice President of Data Science & Engineering for our Technology department. At the Wikimedia Foundation, we operate the world’s largest collaborative project: a top ten website, reaching a billion people globally every month, while incorporating the values of privacy, transparency and community that are so important to our users. 

Reporting to the Chief Technology Officer, the VP of Data Science & Engineering is a key member of the Foundation’s leadership team and an active participant in the strategic decision making framing the work of the technology department, the Wikimedia Foundation and the Wikimedia movement.

This role is responsible for planning and executing an integrated multi-year data science and engineering strategy spanning our work in artificial intelligence, machine learning, search, natural language processing and analytics. This strategy will interlock with and support the larger organization and movement strategy in service of our vision of enabling every human being to share freely in the sum of human knowledge.

Working closely with other Technology and Product teams, as well as our community of contributors and readers, you’ll lead a team of dedicated directors, engineering managers, software engineers, data engineers, and data scientists who are shaping the next generation of data usage, analysis and access across all Wikimedia projects.

Some examples of our team’s work in the realm of data science and data engineering can be found on our blog, including deeper info on our work in improving edit workflows with machine learning, our use of Kafka and Hadoop, and our analysis of people falling into the “Wikipedia rabbit hole”. Lately we have been thinking about how to best identify traffic anomalies that might indicate outages or, possibly, censorship.
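As a toy version of that traffic-anomaly idea (not the Foundation's actual method), one can flag hours whose request counts sit far outside a rolling baseline:

```python
# Sketch: flag hourly traffic counts that deviate strongly from a rolling
# baseline. Window size and threshold are illustrative, not tuned values.
import pandas as pd

def flag_anomalies(counts: pd.Series, window: int = 24, z: float = 4.0) -> pd.Series:
    """Boolean mask of hours deviating more than `z` rolling standard
    deviations from the rolling mean of the surrounding `window` hours."""
    mean = counts.rolling(window, min_periods=window).mean()
    std = counts.rolling(window, min_periods=window).std()
    return (counts - mean).abs() > z * std
```

Applied per project and per country, sudden sustained drops flagged this way could hint at outages or censorship, which is exactly why such signals need careful human review.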

You are responsible for:

  • Leading the technical and engineering efforts of a global team of engineers, data scientists and managers focused on our efforts in productionizing artificial intelligence, data science, analytics, machine learning and natural language processing models as well as data operations. These efforts currently encompass three teams: Search Platform, Analytics and Scoring Platform (Machine Learning Engineering)
  • Working closely with our Research, Architecture, Security, Site Reliability and Platform teams to define our next generation of data architecture, search, machine learning and analytics infrastructure
  • Creating scalable engineering management processes and prioritization rubrics
  • Developing the strategy, plan, vision, and cross-functional teams to create a holistic data strategy for the Wikimedia Foundation, taking into account our fundamental values of transparency, privacy, and collaboration, in partnership with internal and external stakeholders and community members.
  • Ensuring data is reliable, consistent, accessible, secure, and available in a timely manner for external and internal stakeholders, in accordance with our privacy policy.
  • Negotiating shared goals, roadmaps and dependencies with finance, product, legal and communication departments
  • Contributing to our culture by managing, coaching and developing our engineering and data teams
  • Illustrating your success in making your mark on the world by collaboratively measuring and adapting our data strategy within the technology department and the broader Foundation
  • Managing up to 5 direct reports with a total team size of 20

Skills and Experience:

  • Deep experience in leading data science, machine learning, search or data engineering teams, with the ability to separate the hype in the artificial intelligence space from the reality of delivering production-ready data systems
  • 5+ years senior engineering leadership experience
  • Demonstrated ability to balance competing interests in a complex technical and social environment
  • Proven success at all stages of the engineering process and product lifecycle, leading to significant, measurable impact.
  • Previous hands-on experience in production big data and machine learning environments at scale
  • Experience building and supporting diverse, international and distributed teams
  • Outstanding oral and written English language communications

Qualities that are important to us:

  • You take a solutions-focused approach to challenging data and technical problems
  • A passion for people development, team culture and the management of ideas
  • You have a desire to show the world how data can be done well while honoring the user’s right to privacy

Additionally, we’d love it if you have:

  • Experience with modern machine learning, search and natural language processing platforms
  • A track record of open source participation
  • Fluency or familiarity with languages in addition to English
  • Spent time having lived or worked outside your country of origin
  • Experience as a member of a volunteer community

The Wikimedia Foundation is... 

...the nonprofit organization that hosts and operates Wikipedia and the other Wikimedia free knowledge projects. Our vision is a world in which every single human can freely share in the sum of all knowledge. We believe that everyone has the potential to contribute something to our shared knowledge, and that everyone should be able to access that knowledge, free of interference. We host the Wikimedia projects, build software experiences for reading, contributing, and sharing Wikimedia content, support the volunteer communities and partners who make Wikimedia possible, and advocate for policies that enable Wikimedia and free knowledge to thrive. The Wikimedia Foundation is a charitable, not-for-profit organization that relies on donations. We receive financial support from millions of individuals around the world, with an average donation of about $15. We also receive donations through institutional grants and gifts. The Wikimedia Foundation is a United States 501(c)(3) tax-exempt organization with offices in San Francisco, California, USA.

The Wikimedia Foundation is an equal opportunity employer, and we encourage people with a diverse range of backgrounds to apply.

U.S. Benefits & Perks*

  • Fully paid medical, dental and vision coverage for employees and their eligible families (yes, fully paid premiums!)
  • The Wellness Program provides reimbursement for mind, body and soul activities such as fitness memberships, baby sitting, continuing education and much more
  • The 401(k) retirement plan offers matched contributions at 4% of annual salary
  • Flexible and generous time off - vacation, sick and volunteer days, plus 19 paid holidays - including the last week of the year.
  • Family friendly! 100% paid new parent leave for seven weeks plus an additional five weeks for pregnancy, flexible options to phase back in after leave, fully equipped lactation room.
  • For those emergency moments - long and short term disability, life insurance (2x salary) and an employee assistance program
  • Pre-tax savings plans for health care, child care, elder care, public transportation and parking expenses
  • Telecommuting and flexible work schedules available
  • Appropriate fuel for thinking and coding (aka, a pantry full of treats) and monthly massages to help staff relax
  • Great colleagues - diverse staff and contractors speaking dozens of languages from around the world, fantastic intellectual discourse, mission-driven and intensely passionate people

*Eligible non-US benefits are specific to location and dependent on employer of record

Senior Data Engineer
Acast  
senior java scala big data docker cloud Feb 10
Acast is the world-leading technology platform for on-demand audio and podcasting, with offices in Stockholm, London, New York, Los Angeles, Sydney, Paris, Oslo and Berlin. We have over 150M monthly listens today, and are growing rapidly. At our core is a love of audio and the fascinating stories our podcasters tell.

We are a flat organization that supports a culture of autonomy and respect, and find those with an entrepreneurial spirit and curious mindset thrive at Acast. 

We are looking for a Senior Data Engineer to join a new purpose-driven team that will create data-driven products to help other teams provide smarter solutions to our end customers, as well as core datasets for business-critical use cases such as payouts to our podcasters. This team’s ambition is to transform our data into insights. The products you build will be used by our mobile apps, the product suite we have for podcast creators and advertisers, as well as by other departments within Acast.
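As a purely illustrative example of such a core dataset (the region, database, table, and bucket names are assumptions, not Acast's actual schema), monthly listens per show could be aggregated with an Athena query launched from Python:

```python
# Sketch: kick off an Athena aggregation of monthly listens per show.
# Region, database, table, and result bucket are hypothetical.
import boto3

athena = boto3.client("athena", region_name="eu-west-1")

resp = athena.start_query_execution(
    QueryString="""
        SELECT show_id,
               date_trunc('month', listened_at) AS month,
               count(*) AS listens
        FROM listens
        GROUP BY 1, 2
    """,
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-query-results/"},
)
print("query id:", resp["QueryExecutionId"])
```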

In this role you will work with other engineers and product owners within a cross-functional agile team.

You

  • 3+ years of experience building robust big data ETL pipelines within the Hadoop Ecosystem: Spark, Hive, Presto, etc.
  • Are proficient in Java or Scala and Python
  • Experience with AWS cloud environment: EMR, Glue, Kinesis, Athena, DynamoDB, Lambda, Redshift, etc.
  • Have strong knowledge of SQL and NoSQL database design and modelling, and know the differences between modern big data systems and traditional data warehousing
  • DevOps and infrastructure-as-code experience (a plus); familiarity with tools like Jenkins, Ansible, Docker, Kubernetes, CloudFormation, Terraform, etc.
  • Advocate agile software development practices and balance trade-offs in time, scope and quality
  • Are curious and a fast learner who can adapt quickly and enjoy a dynamic and ever-changing environment

Benefits

  • Monthly wellness allowance
  • 30 days holiday
  • Flexible working
  • Pension scheme
  • Private medical insurance
Our engineering team is mostly located in central Stockholm, but with a remote-first culture we’re able to bring on people who prefer full-time remote work from Sweden, Norway, the UK, France and Germany.

Do you want to be part of our ongoing journey? Apply now!

Solutions Architect - Pacific Northwest
java python scala big data linux cloud Feb 07
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Sequoia Capital, Benchmark, and Index Ventures to recently invest a combined $125 million in our Series D financing. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!

We are looking for a Solutions Architect to join our Customer Success team. As a Solutions Architect (SA), you will help customers leverage streaming architectures and applications to achieve their business results. In this role, you will interact directly with customers to provide software architecture, design, and operations expertise that leverages your deep knowledge of and experience in Apache Kafka, the Confluent platform, and complementary systems such as Hadoop, Spark, Storm, relational and NoSQL databases. You will develop and advocate best practices, gather and validate critical product feedback, and help customers overcome their operational challenges.

Throughout all these interactions, you will build a strong relationship with your customer in a very short space of time, ensuring exemplary delivery standards. You will also have the opportunity to help customers build state-of-the-art streaming data infrastructure, in partnership with colleagues who are widely recognized as industry leaders, as well as optimizing and debugging customers existing deployments.
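For a concrete taste of the streaming applications involved, here is a minimal consumer loop with the confluent-kafka Python client; the broker, group id, and topic are assumptions for illustration.

```python
# Sketch: consume events from a Kafka topic (confluent-kafka client).
# Broker address, group id, and topic are assumed for illustration.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # assumed broker
    "group.id": "demo-consumer",
    "auto.offset.reset": "earliest",        # start from the log's beginning
})
consumer.subscribe(["pageviews"])

try:
    while True:
        msg = consumer.poll(timeout=1.0)    # wait up to 1s for a record
        if msg is None:
            continue
        if msg.error():
            print("consumer error:", msg.error())
            continue
        print(f"{msg.topic()}[{msg.partition()}] {msg.value().decode('utf-8')}")
finally:
    consumer.close()
```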

Location:
You will be based anywhere in the Pacific Northwest, with 60-70% travel expected.

Responsibilities

  • Helping customers determine their platform and/or application strategy for moving to a more real-time, event-based business. Such engagements often involve remote preparation; presenting an onsite or remote workshop for the customer’s architects, developers, and operations teams; investigating (with Engineering and other coworkers) solutions to difficult challenges; and writing a recommendations summary doc.
  • Providing feedback to the Confluent Product and Engineering groups
  • Building tooling for another team or the wider company to help us push our technical boundaries and improve our ability to deliver consistently with high quality
  • Testing performance and functionality of new components developed by Engineering
  • Writing or editing documentation and knowledge base articles, including reference architecture materials and design patterns based on customer experiences
  • Honing your skills, building applications, or trying out new product features
  • Participating in community and industry events

Requirements

  • Deep experience designing, building, and operating in-production Big Data, stream processing, and/or enterprise data integration solutions, ideally using Apache Kafka
  • Demonstrated experience successfully managing multiple B2B infrastructure software development projects, including driving expansion, customer satisfaction, feature adoption, and retention
  • Experience operating Linux (configure, tune, and troubleshoot both RedHat and Debian-based distributions)
  • Experience using cloud providers (Amazon Web Services, Google Cloud, Microsoft Azure) for running high-throughput systems
  • Experience with Java Virtual Machine (JVM) tuning and troubleshooting
  • Experience with distributed systems (Kafka, Hadoop, Cassandra, etc.)
  • Proficiency in Java
  • Strong desire to tackle hard technical problems, and proven ability to do so with little or no direct daily supervision
  • Excellent communication skills, with an ability to clearly and concisely explain tricky issues and complex solutions
  • Ability to quickly learn new technologies
  • Ability and willingness to travel up to 50% of the time to meet with customers

Bonus Points

  • Experience helping customers build Apache Kafka solutions alongside Hadoop technologies, relational and NoSQL databases, message queues, and related products
  • Experience with Scala, Python, or Go
  • Experience working with a commercial team and demonstrated business acumen
  • Experience working in a fast-paced technology start-up
  • Experience managing projects, using any known methodology to scope, manage, and deliver on plan no matter the complexity
  • Bachelor-level degree in computer science, engineering, mathematics, or another quantitative field


Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.
Data Science Engineer
data science java python scala big data cloud Feb 05
Contrast Security is the world’s leading provider of security technology that enables software applications to protect themselves against cyber attacks. Contrast's patented deep security instrumentation is the breakthrough technology that enables highly accurate analysis and always-on protection of an entire application portfolio, without disruptive scanning or expensive security experts. Only Contrast has intelligent agents that work actively inside applications to prevent data breaches, defeat hackers and secure the entire enterprise from development, to operations, to production.

Our Application Security Research (Contrast Labs) team is hyper-focused on continuous vulnerability and threat research affecting the world's software ecosystem. As a Data Science Engineer on the Research team, you will be responsible for expanding and optimizing data from our real-time security intelligence platform, as well as optimizing data flow and collection for cross-functional teams.

The Data Science Engineer will support our research team, software developers, database architects, marketing associates, product team, and other areas of the company on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company's data architecture to support our next generation of products and data initiatives. It will present an opportunity as a data scientist to also contribute original research through data correlation.

The Data Science Engineer is responsible for supporting and contributing to Contrast’s growing original security research efforts relevant to the development communities associated with the Contrast Assess, Protect, and OSS platforms. Original research will be published in company blogs, papers and presentations.

If you're amazing but missing some of these, email us your résumé and cover letter anyway. Please include a link to your Github or BitBucket account, as well as any links to some of your projects if available.

Responsibilities

  • Conduct basic and applied research on important and challenging problems in data science as it relates to the problems Contrast is trying to solve.
  • Assemble large, complex data sets that meet functional / non-functional business requirements. 
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and big data technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into threats, vulnerabilities, customer usage, operational efficiency and other key business performance metrics (a toy example follows this list).
  • Help define and drive data-driven research projects, either on your own or in collaboration with others on the team.
  • Engage with Contrast’s product teams and customers to promote and seek out new data science research initiatives.
  • Create data tools for analytics and research team members that assist them in building and optimizing our product into an innovative industry leader.
  • Advanced working knowledge of SQL and experience with relational databases, including query authoring and working familiarity with a variety of databases.
  • Development and presentation of content associated with the research through conference speaking and/or blogging.
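A toy cut at the analytics-tools item above (the file name, columns, and severity labels are hypothetical, not Contrast's schema):

```python
# Sketch: summarize vulnerability findings per application from pipeline
# output. File name, columns, and severity labels are hypothetical.
import pandas as pd

findings = pd.read_parquet("findings.parquet")   # assumed pipeline output

summary = (findings
           .groupby(["app_id", "severity"])      # count findings per bucket
           .size()
           .unstack(fill_value=0))
print(summary.sort_values("critical", ascending=False).head(10))
```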

About You

  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets. 
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • Experience using some of the following software/tools:
  • Big data tools: Hadoop, Spark, Kafka, etc.
  • Relational SQL and NoSQL databases, including MongoDB and MySQL.
  • Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
  • AWS cloud services: EC2, EMR, RDS, Redshift
  • Stream-processing systems: Storm, Spark-Streaming, etc.
  • Object-oriented/object function scripting languages: Python, Java, C++, Scala, etc.
  • 5+ years of experience in a Data Science role
  • Strong project management and organizational skills.
  • An understanding of the OWASP Top 10 and SANS/CWE Top 25 is nice to have
  • You ask questions, let others know when you need help, and tell others what you need.
  • A graduate degree (minimum) in Computer Science, Statistics, Informatics, Information Systems or another quantitative field.

What We Offer

  • Competitive compensation
  • Daily team lunches (in office)
  • Meaningful stock options
  • Medical, dental, and vision benefits
  • Flexible paid time off 
By submitting your application, you are providing Personally Identifiable Information about yourself (cover letter, resume, references, or other employment-related information) and hereby give your consent for Contrast Security, and/ or our HR-related Service Providers, to use this information for the purpose of processing, evaluating and responding to your application for current and future career opportunities. Contrast Security is an equal opportunity employer and our team is comprised of individuals from many diverse backgrounds, lifestyles and locations. 

The California Consumer Privacy Act of 2018 (“CCPA”) will go into effect on January 1, 2020. Under CCPA, businesses must be overtly transparent about the personal information they collect, use, and store on California residents. CCPA also gives employees, applicants, independent contractors, emergency contacts and dependents (“CA Employee”) new rights to privacy.

In connection with your role here at Contrast, we collect information that identifies, reasonably relates to, or describes you (“Personal Information”). The categories of Personal Information that we collect, use or store include your name, government-issued identification number(s), email address, mailing address, emergency contact information, employment history, educational history, criminal record, demographic information, and other electronic network activity information by way of mobile device management on your Contrast-issued equipment. We collect and use those categories of Personal Information (the majority of which is provided by you) about you for human resources and other business-driven purposes, including evaluating your performance here at Contrast, evaluating you as a candidate for promotion within Contrast, managing compensation (including payroll and benefits), record keeping in relation to recruiting and hiring, conducting background checks as permitted by law, and ensuring compliance with applicable legal requirements for Contrast. We collect, use and store the minimal amount of information possible

We also collect Personal Information in connection with your application for benefits. In addition to the above, Personal Information also identifies those on behalf of whom you apply for benefits. During your application for benefits, the categories of Personal Information that we collect include name, government-issued identification number(s), email address, mailing address, emergency contact information, and demographic information. We collect and use those categories of Personal Information for administering the benefits for which you are applying and ensuring compliance with applicable legal requirements and Contrast policies.
As a California resident, you are entitled to certain rights under CCPA:

-You have the right to know what personal information we have collected from you as a California employee;
-You have the right to know what personal information is sold or disclosed and to whom. That said, we do not sell your information. We do, however, disclose information to third parties in connection with the management of payroll, employee benefits, etc. to fulfill our obligations to you as an employee of Contrast. Each of those third parties has been served with a Notice to Comply with CCPA or has entered into a CCPA Addendum with Contrast which precludes them from selling your information;
-You have the right to opt out of the sale of your personal information. Again, we do not sell it, but you may want to be aware of this right as a "consumer" in California with respect to other businesses; and
-You have the right to be free from retaliation for exercising any of these rights

If you have any questions, please let us know!
Data Visualization Engineer
data science machine learning big data linux mysql backend Jan 31
We are looking for a dynamic and talented Data Visualization Engineer with a passion for data and for using cutting-edge tools and data-based insights to turn vision and ability into results and actionable solutions for our clients. The successful candidate will leverage their talents and skills to design, develop and implement graphical representations of information and data, using visual elements like charts, graphs, and maps, and a variety of data visualization tools. You will own, architect, design, and implement a Data Visualization platform that leverages big data, data warehouses, data visualization suites, and cutting-edge open source technologies, and you will drive the vision of our Big Data Visualization platform so that it is scalable, interactive, and real-time to support our state-of-the-art data processing framework for our geospatial-oriented platform.

The right candidate will have a proven ability to drive results with data-based insights, a passion for discovering solutions hidden in large datasets, and a commitment to working with stakeholders to improve mission outcomes. Do you want to take your ideas and concepts into real-life mission-critical solutions? Do you want to work with the latest bleeding-edge technology? Do you want to work with a dynamic, world-class team of engineers, while learning and developing your skills and your career? You can do all those things at Prominent Edge!

We are a small company of 24+ developers and designers who put themselves in the shoes of our customers and make sure we deliver strong solutions. Our projects and the needs of our customers vary greatly; therefore, we always choose the technology stack and approach that best suits the particular problem and the goals of our customers. As a result, we want developers who do high-quality work, stay current, and are up for learning and applying new technologies when appropriate. We want engineers who have an in-depth knowledge of Amazon Web Services and are up for using other infrastructures when needed. We understand that for our team to perform at its best, everyone needs to work on tasks that they enjoy. Most of our projects are web applications, which often have a geospatial aspect to them. We also really take care of our employees, as demonstrated in our exceptional benefits package. Check out our website at https://prominentedge.com for more information.

Required Skills:

  • A successful candidate will have experience in many (if not all) of the following technical competencies: data visualization, data engineering, data science, statistics and machine learning, coding languages, databases, and reporting technologies.
  • Ability to design, develop and implement graphical representations of information and data, using visual elements like charts, graphs, and maps, and a variety of data visualization tools.
  • At least 5 years of experience in data engineering, data science, and/or data visualization.
  • Design and develop ETL and storage for the new big data platform with open source technologies such as Kafka/RabbitMQ/Redis, Spark, Presto, Splunk.
  • Create insightful visualizations with dashboarding and charting tools such as Kibana / Plotly / Matplotlib / Grafana / Tableau (see the sketch after this list).
  • Strong proficiency with a backend database such as Postgres, MySQL, and/or familiarity with NoSQL databases such as Cassandra, DynamoDB or MongoDB.
  • Strong background in scripting languages.
  • Capable of working in a linux server environment.
  • Experience or interest in working on multiple projects with multiple product teams.
  • Excellent verbal and written communication skills along with the ability to present technical data and enjoys working with both technical and non-technical audiences.
  • Bachelor's Degree in Computer Science, Data Science, Machine Learning, AI or related field or equivalent experience.
  • Current U.S. security clearance, or ability to obtain a U.S. security clearance.
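A bare-bones example of the charting called out above, using Matplotlib with synthetic stand-in data:

```python
# Sketch: plot a synthetic daily-events series with Matplotlib.
# The data here is made up; a real chart would pull from the platform.
import matplotlib.pyplot as plt
import pandas as pd

days = pd.date_range("2020-01-01", periods=30, freq="D")
counts = pd.Series(range(30), index=days)        # stand-in for real metrics

fig, ax = plt.subplots(figsize=(8, 3))
ax.plot(counts.index, counts.values, marker="o", linewidth=1)
ax.set_title("Daily events (synthetic)")
ax.set_xlabel("date")
ax.set_ylabel("events")
fig.tight_layout()
fig.savefig("daily_events.png", dpi=150)
```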

Desired skills:

  • Master's Degree or PhD. in Computer Science, Data Science, Machine Learning, AI or related field is a plus.

W2 Benefits:

  • Not only do you get to join our team of awesome, playful ninjas, we also have great benefits:
  • Six weeks paid time off per year (PTO+Holidays).
  • Six percent 401k matching, vested immediately.
  • Free PPO/POS healthcare for the entire family.
  • We pay you for every hour you work. Need something extra? Give yourself a raise by doing more hours when you can.
  • Want to take time off without using vacation time? Shuffle your hours around in any pay period.
  • Want a new MacBook Pro laptop? We'll get you one. If you like your MacBook Pro, we’ll buy you the new version whenever you want.
  • Want some training or to travel to a conference that is relevant to your job? We offer that too!
  • This organization participates in E-Verify.


This Year

Consulting Engineer
java python scala big data linux azure Jan 17
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Sequoia Capital, Benchmark, and Index Ventures to recently invest a combined $125 million in our Series D financing. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!

Consulting Engineers drive customer success by helping them realize business value from the burgeoning flow of real-time data streams in their organizations. In this role you’ll interact directly with our customers to provide software, development and operations expertise, leveraging deep knowledge of best practices in the use of Apache Kafka, the broader Confluent Platform, and complementary systems like Hadoop, Spark, Storm, relational databases, and various NoSQL databases.  

Throughout all of these interactions, you’ll build strong relationships with customers, ensure exemplary delivery standards, and have a lot of fun building state-of-the-art streaming data infrastructure alongside colleagues who are widely recognized as leaders in this space.

Promoting Confluent and our amazing team to the community and wider public audience is something we invite all our employees to take part in. This can be in the form of writing blog posts, speaking at meetups and well-known industry events about use cases and best practices, or something as simple as releasing code.

While Confluent is headquartered in Palo Alto, you can work remotely from any location on the East Coast of the United States, as long as you are able to travel to client engagements as needed.

A typical week at Confluent in this role may involve:

  • Preparing for an upcoming engagement, discussing the goals and expectations with the customer and preparing an agenda
  • Researching best practices or components required for the engagement
  • Delivering an engagement on-site, working with the customer’s architects and developers in a workshop environment
  • Producing and delivering the post-engagement report to the customer
  • Developing applications on the Confluent Platform (a minimal producer sketch follows this list)
  • Deploying, augmenting, and upgrading Kafka clusters
  • Building tooling for other teams and the wider company
  • Testing performance and functionality of new components developed by Engineering
  • Writing or editing documentation and knowledge base articles
  • Honing your skills, building applications, or trying out new product features
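
As a hedged illustration of the "developing applications" item above, here is a minimal producer sketch using the confluent-kafka Python client; the broker address and the "orders" topic are placeholders, not values from the posting:

    from confluent_kafka import Producer

    # Placeholder broker address; in practice this comes from cluster config.
    producer = Producer({"bootstrap.servers": "localhost:9092"})

    def on_delivery(err, msg):
        # Invoked once per message when the broker acknowledges (or rejects) it.
        if err is not None:
            print(f"delivery failed: {err}")

    for i in range(10):
        # "orders" is a hypothetical topic name.
        producer.produce("orders", key=str(i), value=f"order-{i}",
                         on_delivery=on_delivery)

    producer.flush()  # block until all outstanding messages are delivered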

Required Skills:

  • Deep experience building and operating in-production Big Data, stream processing, and/or enterprise data integration solutions using Apache Kafka
  • Experience operating Linux (configure, tune, and troubleshoot both Red Hat and Debian-based distributions)
  • Experience with Java Virtual Machine (JVM) tuning and troubleshooting
  • Experience with distributed systems (Kafka, Hadoop, Cassandra, etc.)
  • Proficiency in Java
  • Excellent communication skills, with an ability to clearly and concisely explain tricky issues and complex solutions
  • Ability and willingness to travel up to 50% of the time to meet with customers at client engagements
  • Bachelor-level degree in computer science, engineering, mathematics, or another quantitative field

Nice to have:

  • Experience using Amazon Web Services, Azure, and/or GCP for running high-throughput systems
  • Experience helping customers build Apache Kafka solutions alongside Hadoop technologies, relational and NoSQL databases, message queues, and related products
  • Experience with Python, Scala, or Go
  • Experience with configuration and management tools such as Ansible, Terraform, Puppet, Chef
  • Experience writing to network-based APIs (preferably REST/JSON or XML/SOAP)
  • Knowledge of enterprise security practices and solutions, such as LDAP and/or Kerberos
  • Experience working with a commercial team and demonstrated business acumen
  • Experience working in a fast-paced technology start-up
  • Experience managing projects, using any known methodology to scope, manage, and deliver on plan no matter the complexity
Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.
Share this job:
Senior Back End DevOps Engineer
aws security kubernetes shell python devops Jan 16

As more companies adopt public cloud infrastructure, and as cyber attacks grow in sophistication and harm, the need to safeguard companies from these threats has never been more urgent.

Lacework’s novel approach to security fundamentally converts cyber security into a big data problem. They are a startup based in Silicon Valley that applies large-scale data mining and machine learning to public cloud security. Within a cloud environment (AWS, GCP, Azure), their technology captures all communication between processes/users/external machines and uses advanced data analytics and machine learning techniques to detect anomalies that indicate potential security threats and vulnerabilities. The company is led by an experienced team who have built large-scale systems at Google, ParAccel (Amazon Redshift), Pure Storage, Oracle, and Juniper Networks. Lacework is well funded by a tier-one VC firm and is based in San Jose, CA.
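
To make the "security as a big data problem" idea concrete, here is a purely illustrative sketch (not Lacework's actual implementation) of flagging anomalous connections with an off-the-shelf unsupervised model; every feature and number is invented:

    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Invented per-connection features: bytes sent, bytes received, duration (s).
    normal = rng.normal(loc=[500.0, 800.0, 30.0], scale=[50.0, 80.0, 5.0],
                        size=(1000, 3))
    outlier = np.array([[50_000.0, 10.0, 600.0]])  # one exfiltration-like record
    X = np.vstack([normal, outlier])

    model = IsolationForest(contamination=0.001, random_state=0).fit(X)
    labels = model.predict(X)         # -1 marks suspected anomalies
    print(np.where(labels == -1)[0])  # should include the planted outlier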

They are looking for a Senior DevOps engineer with strong AWS and Kubernetes experience who is excited about building an industry leading, next generation Cloud Security System.

You will be a part of the team that architects, designs, and implements highly scalable distributed systems that provide availability, scalability and performance guarantees. This is a unique and rare opportunity to get in on the ground floor and help shape their technologies, products and business.

Roles/Responsibilities

  • Assist in managing Technical Operations, Site Reliability, production operations and engineering environments 
  • Run production operations for their SaaS product
    • Manage the monitoring system
    • Debug live production issues
    • Manage software release roll-outs
  • Use your engineering skills to promote platform scalability, reliability, manageability, and cost efficiency
  • Work with the engineering and QA teams to provide your valuable feedback about how to improve the product
  • Participate in on-call rotations (but there is really not a lot of work since you will automate everything!)

Requirements:

  • 4+ years of relevant experience (Technical Operation, SRE, System Administration)
  • AWS experience 
  • Strong scripting skills in Shell and/or Python
  • Eager to learn new technologies
  • Ability to define and follow procedures
  • Great communication skills
  • Computer Science degree 
Share this job:
Principal Product Manager - Couchbase Server, Analytics
 
manager product manager big data cloud Jan 15
Forbes listed Couchbase as one of the market’s next billion dollar 'Unicorns' and the Couchbase NoSQL data platform is widely used by leading enterprises to power their business-critical applications.
 
We are looking for a Principal Product Manager with a strong technical background in database systems – someone with product management experience and a good understanding of mission-critical transactional and analytical use cases.
 
As the Principal Product Manager for Couchbase Analytics, you will define the product roadmap and requirements for our industry-leading NoSQL data platform. You will also work with our marketing team to position and generate awareness for our platform, and enable the field teams to help our customers successfully deploy our solutions.

Responsibilities

  • Drive Couchbase Analytics strategy and roadmap including recommendations on tools, vendors/partners, and technologies
  • Engage with customers to understand their use cases and requirements, influence and develop the product roadmap and identify high-value integrations with the broader analytics ecosystem
  • Work with the engineering teams to prioritize and drive feature specifications from concept to general availability
  • Work with internal functional groups (engineering, marketing, support, sales, etc.) as well as customers and partners to drive feature priorities, product releases and customer engagements
  • Contribute to internal and external product-related content like sales collateral, feature blogs and documentation
  • Define and track KPIs to measure success of product launches and new features

Requirements

  • 5+ years of experience in the information management industry
  • 5+ years highly-focused product management experience in a fast-paced technology company
  • BS or MS degree in Computer Science
  • Domain expertise in databases is a must, with deep knowledge of the database landscape
  • Experience with database and analytics systems, including BI and ML tools
  • Critical thinker with a strong bias towards action, able to go deep into technology and relate technical enhancements to customer use cases
  • Track record of exceptional performance and collaborative teamwork
  • Clear, crisp communicator with strong written, oral, and presentation skills
  • Experience in a start-up environment a strong plus
About Couchbase

Couchbase's mission is to be the platform that accelerates application innovation. To make this possible, Couchbase created an enterprise-class, multi-cloud NoSQL database architected on top of an open source foundation. Couchbase is the only database that combines the best of NoSQL with the power and familiarity of SQL, all in a single, elegant platform spanning from any cloud to the edge.  
 
Couchbase has become pervasive in our everyday lives; our customers include industry leaders Amadeus, AT&T, BD (Becton, Dickinson and Company), Carrefour, Comcast, Disney, DreamWorks Animation, eBay, Marriott, Neiman Marcus, Tesco, Tommy Hilfiger, United, Verizon, Wells Fargo, as well as hundreds of other household names.

Couchbase’s HQ is conveniently located in Santa Clara, CA with additional offices throughout the globe. We’re committed to a work environment where you can be happy and thrive, in and out of the office.

At Couchbase, you’ll get:
* A fantastic culture
* A focused, energetic team with aligned goals
* True collaboration with everyone playing their positions
* Great market opportunity and growth potential
* Time off when you need it.
* Regular team lunches and fully-stocked kitchens.
* Open, collaborative spaces.
* Competitive benefits and pre-tax commuter perks

Whether you’re a new grad or a proven expert, you’ll have the opportunity to learn new skills, grow your career, and work with the smartest, most passionate people in the industry.

Revolutionizing an industry requires a top-notch team. Become a part of ours today. Bring your big ideas and we'll take on the next great challenge together.

Want to learn more? Check out our blog: https://blog.couchbase.com/

Couchbase is proud to be an equal opportunity workplace. Individuals seeking employment at Couchbase are considered without regard to age, ancestry, color, gender (including pregnancy, childbirth, or related medical conditions), gender identity or expression, genetic information, marital status, medical condition, mental or physical disability, national origin, protected family care or medical leave status, race, religion (including beliefs and practices or the absence thereof), sexual orientation, military or veteran status, or any other characteristic protected by federal, state, or local laws.
Share this job:
Big Data Engineer
Infiot  
java sql bigdata big data python linux Jan 08

We are looking for a Software Engineer to work with us primarily on the data ingestion pipelines and analytics databases of our cloud platform. The qualified candidate will join a team of full stack engineers who work on front end, back end, and devops initiatives needed to accomplish the Infiot vision.

The ideal candidate would have most of the following qualifications. We will also consider candidates who have some of these qualifications and are interested in growing into the rest of this skill set.

  • Experience with scalable cloud native multi-tenant architectures
  • Fluency in Java
  • Experience with HTTP based APIs (REST or GraphQL)
  • Experience in big data frameworks (Apache Beam, Spark, etc.); a minimal Beam sketch follows this list
  • Experience in SQL and NoSQL databases with a focus on scale
  • Some Linux command line, Python, and make
  • Ability to work in a team setting
  • Passion for automation
  • Passion for personal productivity improvement
  • Passion for quality and customer satisfaction
  • Passion for development driven testing
  • MS/PhD in Computer Science or equivalent knowledge/experience
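
As referenced in the frameworks item above, here is a minimal Apache Beam batch pipeline sketch; the file paths are hypothetical:

    import apache_beam as beam

    with beam.Pipeline() as p:
        (
            p
            | "Read" >> beam.io.ReadFromText("events.jsonl")
            | "Strip" >> beam.Map(str.strip)
            | "DropBlank" >> beam.Filter(bool)     # discard empty lines
            | "Write" >> beam.io.WriteToText("out/events")
        )
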
Share this job:
Senior Software Engineer, Data Pipeline
java scala go elasticsearch apache-spark senior Dec 31 2019

About the Opportunity

The SecurityScorecard ratings platform helps enterprises across the globe manage the cyber security posture of their vendors. Our SaaS products have created a new category of enterprise software and our culture has helped us be recognized as one of the 10 hottest SaaS startups in NY for two years in a row. Our investors include both Sequoia and Google Ventures. We are scaling quickly but are ever mindful of our people and products as we grow.

As a Senior Software Engineer on the Data Pipeline Platform team, you will help us scale, support, and build the next-generation platform for our data pipelines. The team’s mission is to empower data scientists, software engineers, data engineers, and threat intelligence engineers to accelerate the ingestion of new data sources and present the data in a meaningful way to our clients.

What you will do:

  • Design and implement systems for ingesting, transforming, connecting, storing, and delivering data from a wide range of sources with various levels of complexity and scale.
  • Enable other engineers to deliver value rapidly with minimum duplication of effort.
  • Automate the data pipeline's supporting infrastructure as code, and improve deployments through better CI/CD pipelines.
  • Monitor, troubleshoot, and improve the data platform to maintain stability and optimal performance.
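
A hedged sketch of one such ingest-transform-deliver step, written with PySpark (Spark is among this posting's tags); the bucket, paths, and column names are invented:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("pipeline-sketch").getOrCreate()

    # Ingest: paths, bucket, and columns below are invented.
    raw = spark.read.json("s3://example-bucket/raw/scans/")

    # Transform: deduplicate, type the timestamp, drop records missing a key field.
    clean = (
        raw.dropDuplicates(["scan_id"])
           .withColumn("scanned_at", F.to_timestamp("scanned_at"))
           .filter(F.col("domain").isNotNull())
    )

    # Deliver: write a curated copy for downstream consumers.
    clean.write.mode("overwrite").parquet("s3://example-bucket/curated/scans/")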

Who you are:

  • Bachelor's degree or higher in a quantitative/technical field such as Computer Science, Engineering, or Math
  • 6+ years of software development experience
  • Exceptional skills in at least one high-level programming language (Java, Scala, Go, Python, or equivalent)
  • Strong understanding of big data technologies such as Kafka, Spark, Storm, Cassandra, Elasticsearch
  • Experience with AWS services including S3, Redshift, EMR and RDS
  • Excellent communication skills to collaborate with cross-functional partners and independently drive projects and decisions

What to Expect in Our Hiring Process:

  • Phone conversation with Talent Acquisition to learn more about your experience and career objectives
  • Technical phone interview with the hiring manager
  • Video or in-person interviews with 1-3 engineers
  • At-home technical assessment
  • Video or in-person interview with engineering leadership
Share this job:
Senior Machine Learning - Series A Funded Startup
machine-learning scala python tensorflow apache-spark machine learning Dec 26 2019
About you:
  • Care deeply about democratizing access to data.  
  • Passionate about big data and are excited by seemingly-impossible challenges.
  • At least 80% of people who have worked with you put you in the top 10% of the people they have worked with.
  • You think life is too short to work with B-players.
  • You are entrepreneurial and want to work in a super fast-paced environment where the solutions aren’t already predefined.
About SafeGraph: 

  • SafeGraph is a B2B data company that sells to data scientists and machine learning engineers. 
  • SafeGraph's goal is to be the place for all information about physical Places.
  • SafeGraph currently has 20+ people and has raised a $20 million Series A.  CEO previously was founder and CEO of LiveRamp (NYSE:RAMP).
  • Company is growing fast, over $10M ARR, and is currently profitable. 
  • Company is based in San Francisco but about 50% of the team is remote (all in the U.S.). We get the entire company together in the same place every month.

About the role:
  • Core software engineer.
  • Reporting to SafeGraph's CTO.
  • Work as an individual contributor.  
  • Opportunities for future leadership.

Requirements:
  • You have at least 6 years of relevant work experience.
  • Deep understanding of machine learning models, data analysis, and both supervised and unsupervised learning methods. 
  • Proficiency writing production-quality code, preferably in Scala, Java, or Python.
  • Experience working with huge data sets. 
  • You are authorized to work in the U.S.
  • Excellent communication skills.
  • You are amazingly entrepreneurial.
  • You want to help build a massive company. 
Nice to haves:
  • Experience using Apache Spark to solve production-scale problems.
  • Experience with AWS.
  • Experience with building ML models from the ground up.
  • Python, Database and Systems Design, Scala, TensorFlow, Apache Spark, Hadoop MapReduce.
Share this job:
Senior Big Data Software Engineer
scala apache-spark python java hadoop big data Dec 23 2019
About you:
  • Care deeply about democratizing access to data.  
  • Passionate about big data and are excited by seemingly-impossible challenges.
  • At least 80% of people who have worked with you put you in the top 10% of the people they have worked with.
  • You think life is too short to work with B-players.
  • You are entrepreneurial and want to work in a super fast-paced environment where the solutions aren’t already predefined.
  • You live in the U.S. or Canada and are comfortable working remotely.
About SafeGraph: 

  • SafeGraph is a B2B data company that sells to data scientists and machine learning engineers. 
  • SafeGraph's goal is to be the place for all information about physical Places.
  • SafeGraph currently has 20+ people and has raised a $20 million Series A.  CEO previously was founder and CEO of LiveRamp (NYSE:RAMP).
  • Company is growing fast, over $10M ARR, and is currently profitable. 
  • Company is based in San Francisco but about 50% of the team is remote (all in the U.S.). We get the entire company together in the same place every month.

About the role:
  • Core software engineer.
  • Reporting to SafeGraph's CTO.
  • Work as an individual contributor.  
  • Opportunities for future leadership.

Requirements:
  • You have at least 6 years of relevant work experience.
  • Proficiency writing production-quality code, preferably in Scala, Java, or Python.
  • Strong familiarity with map/reduce programming models (a toy example follows these requirements).
  • Deep understanding of all things “database” - schema design, optimization, scalability, etc.
  • You are authorized to work in the U.S.
  • Excellent communication skills.
  • You are amazingly entrepreneurial.
  • You want to help build a massive company. 
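
As referenced above, a toy illustration of the map/reduce programming model (word count), independent of any framework or of SafeGraph's stack:

    from collections import defaultdict

    docs = ["big data big ideas", "data beats opinions"]

    # Map: emit (word, 1) pairs from each document.
    pairs = [(word, 1) for doc in docs for word in doc.split()]

    # Shuffle: group the emitted values by key.
    groups = defaultdict(list)
    for word, count in pairs:
        groups[word].append(count)

    # Reduce: aggregate each group's values.
    counts = {word: sum(values) for word, values in groups.items()}
    print(counts)  # {'big': 2, 'data': 2, 'ideas': 1, 'beats': 1, 'opinions': 1}
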
Nice to haves:
  • Experience using Apache Spark to solve production-scale problems.
  • Experience with AWS.
  • Experience with building ML models from the ground up.
  • Experience working with huge data sets.
  • Python, Database and Systems Design, Scala, Data Science, Apache Spark, Hadoop MapReduce.
Share this job:
Manager, Solutions Engineering - East
java big data linux cloud dot net Dec 12 2019
The Couchbase Solutions Engineering Manager leads a pre-sales engineering team through any and all sales engagements. This role directs the activities and goals of the Regional Sales Engineering team they are responsible for. 

The position works closely with their direct reports, Regional Sales Director, Regional Solutions Engineering Director, and territory-based Enterprise Sales Representatives to qualify prospective clients for Couchbase products and services within the assigned territory. Our Solutions Engineers are the primary technical field experts, responsible for actively driving and managing the technical part of a sales engagement. 

The Solutions Engineering Manager will provide guidance/mentoring to Solutions Engineers. The role requires a high degree of organization as it involves managing a team of 5-10 Solutions Engineers and multiple opportunities at once. In this exciting role, you will become an expert explaining NoSQL advantages, how Couchbase Server works and how it can be used to solve the customer’s problems; all with a good dose of getting customers excited about using this new approach for fast & scalable databases. 

The Solutions Engineering Manager is responsible for owning or guiding their team through the POC, RFI and/or RFP processes. The role will interface with many different types of Enterprise customers and their management teams. Ensuring the advancement of Solutions Engineers in their skills and knowledge to effectively compete in a global marketplace is an important aspect of this role. Additionally, a proven track record of success and demonstrated ability to effectively engage with sales teams are also key. 

Location: Eastern USA - Remote

Responsibilities

  • Hire and lead a world-class team focused on delivering a unique, differentiated customer experience

  • Identify technical and soft skill training needs, perform assessments, and provide feedback to direct reports. Handle escalations and address conflicts

  • Grow the overall team's capability to deliver training and consulting that increase customers' product adoption

  • Partner with the team to evaluate new technical solutions to meet or exceed prospect and customer requirements

  • Build development plans for team members to ensure successful on-boarding and continuing education of the team

  • Demonstrate to customers how to solve their problems and meet their requirements with Couchbase Server and get them excited about NoSQL database technology

  • Develop and maintain an expert understanding of all Couchbase products and services. Establish and continuously update best practices for technical customer engagements in the fast-paced world of NoSQL 

  • Work closely with the sales team on account strategy and identifying additional opportunities in existing accounts including strategizing digital transformation initiatives for customers

  • Balance the workload of the Solutions Engineering team in concert with input from the Enterprise Sales Executive and Regional Sales Leaders

  • Mentor Solutions Engineering teams by providing hands-on technical guidance for building solutions such as microservices, Linux, Kubernetes, cloud deployments, databases, and messaging systems for pre-sales engagements 

  • Ensure the success of customer POC / Pilots through effective management of acceptance criteria and issue escalation/resolution

  • Support and participate with the Solutions Engineers in performing advanced technical presentations for customers and prospects, both remotely and in person

  • Develop and deliver exceptional company/product presentations and demonstrations to build and maintain strong relationships with key customers

  • Work with all technical levels, from managers to architects and developers, and explain Couchbase Server and its uses

  • Be the technical product expert for customers and stay up to date on the NoSQL competitive landscape

  • Work with Product Management and Engineering to provide feedback from the field and represent the customer perspective as well as identify and write internal and external technical collateral

  • Establish, track, monitor and report on actionable metrics and KPIs for product adoption 

  • Represent Couchbase at conferences, industry, and sales events

Qualifications

  • 7+ years of experience serving in the capacity of a pre-sales engineer

  • Ability to teach other members of the team and effectively manage a team of highly skilled Sales Engineers 

  • Experience with traditional RDBMS including schema modeling, performance tuning and configuration

  • Proven ability to provide technical leadership to the account team and engineers

  • Hands-on administration and troubleshooting experience with x86 operating systems (Linux, Windows, Mac OS), networking and storage architectures

  • Familiarity with NoSQL databases or other distributed high-performance systems

  • Must be able to coordinate across various groups and functional teams

  • Ability to apply solutions, technology, and products to a business opportunity

  • Willingness to travel throughout the assigned region both by air and by car

Minimum Qualifications

  • Excellent communication and presentation skills with an ability to present technical solutions concisely to any audience

  • Experience engaging with developers and programming experience in at least one of the following: Java/.NET/PHP

  • Demonstrated passion for diving into technical issues and solving customer problems

  • Demonstrated critical thinking and advanced troubleshooting skills and qualities 

  • Ability to travel a minimum of 25% of the time is required
About Couchbase

Couchbase's mission is to be the platform that accelerates application innovation. To make this possible, Couchbase created an enterprise-class, multi-cloud NoSQL database architected on top of an open source foundation. Couchbase is the only database that combines the best of NoSQL with the power and familiarity of SQL, all in a single, elegant platform spanning from any cloud to the edge.  
 
Couchbase has become pervasive in our everyday lives; our customers include industry leaders Amadeus, AT&T, BD (Becton, Dickinson and Company), Carrefour, Comcast, Disney, DreamWorks Animation, eBay, Marriott, Neiman Marcus, Tesco, Tommy Hilfiger, United, Verizon, Wells Fargo, as well as hundreds of other household names.

Couchbase’s HQ is conveniently located in Santa Clara, CA with additional offices throughout the globe. We’re committed to a work environment where you can be happy and thrive, in and out of the office.

At Couchbase, you’ll get:
* A fantastic culture
* A focused, energetic team with aligned goals
* True collaboration with everyone playing their positions
* Great market opportunity and growth potential
* Time off when you need it.
* Regular team lunches and fully-stocked kitchens.
* Open, collaborative spaces.
* Competitive benefits and pre-tax commuter perks

Whether you’re a new grad or a proven expert, you’ll have the opportunity to learn new skills, grow your career, and work with the smartest, most passionate people in the industry.

Revolutionizing an industry requires a top-notch team. Become a part of ours today. Bring your big ideas and we'll take on the next great challenge together.

Want to learn more? Check out our blog: https://blog.couchbase.com/

Couchbase is proud to be an equal opportunity workplace. Individuals seeking employment at Couchbase are considered without regard to age, ancestry, color, gender (including pregnancy, childbirth, or related medical conditions), gender identity or expression, genetic information, marital status, medical condition, mental or physical disability, national origin, protected family care or medical leave status, race, religion (including beliefs and practices or the absence thereof), sexual orientation, military or veteran status, or any other characteristic protected by federal, state, or local laws.
Share this job:
Manager, Solutions Engineering - West
java big data linux cloud dot net Dec 12 2019
The Couchbase Solutions Engineering Manager leads a pre-sales engineering team through any and all sales engagements. This role directs the activities and goals of the Regional Sales Engineering team they are responsible for. 

The position works closely with their direct reports, Regional Sales Director, Regional Solutions Engineering Director, and territory-based Enterprise Sales Representatives to qualify prospective clients for Couchbase products and services within the assigned territory. Our Solutions Engineers are the primary technical field experts, responsible for actively driving and managing the technical part of a sales engagement. 

The Solutions Engineering Manager will provide guidance/mentoring to Solutions Engineers. The role requires a high degree of organization as it involves managing a team of 5-10 Solutions Engineers and multiple opportunities at once. In this exciting role, you will become an expert explaining NoSQL advantages, how Couchbase Server works and how it can be used to solve the customer’s problems; all with a good dose of getting customers excited about using this new approach for fast & scalable databases. 

The Solutions Engineering Manager is responsible for owning or guiding their team through the POC, RFI and/or RFP processes. The role will interface with many different types of Enterprise customers and their management teams. Ensuring the advancement of Solutions Engineers in their skills and knowledge to effectively compete in a global marketplace is an important aspect of this role. Additionally, a proven track record of success and demonstrated ability to effectively engage with sales teams are also key. 

Location: Western USA, Remote

Responsibilities

  • Hire and lead a world-class team focused on delivering a unique, differentiated customer experience

  • Identify technical and soft skill training needs, perform assessments, and provide feedback to direct reports. Handle escalations and address conflicts

  • Grow the overall team's capability to deliver training and consulting that increase customers' product adoption

  • Partner with the team to evaluate new technical solutions to meet or exceed prospect and customer requirements

  • Build development plans for team members to ensure successful on-boarding and continuing education of the team

  • Demonstrate to customers how to solve their problems and meet their requirements with Couchbase Server and get them excited about NoSQL database technology

  • Develop and maintain an expert understanding of all Couchbase products and services. Establish and continuously update best practices for technical customer engagements in the fast-paced world of NoSQL 

  • Work closely with the sales team on account strategy and identifying additional opportunities in existing accounts including strategizing digital transformation initiatives for customers

  • Balance the workload of the Solutions Engineering team in concert with input from the Enterprise Sales Executive and Regional Sales Leaders

  • Mentor Solutions Engineering teams by providing hands-on technical guidance for building solutions such as microservices, Linux, Kubernetes, cloud deployments, databases, and messaging systems for pre-sales engagements 

  • Ensure the success of customer POC / Pilots through effective management of acceptance criteria and issue escalation/resolution

  • Support and participate with the Solutions Engineers in performing advanced technical presentations for customers and prospects, both remotely and in person

  • Develop and deliver exceptional company/product presentations and demonstrations to build and maintain strong relationships with key customers

  • Work with all technical levels, from managers to architects and developers, and explain Couchbase Server and its uses

  • Be the technical product expert for customers and stay up to date on the NoSQL competitive landscape

  • Work with Product Management and Engineering to provide feedback from the field and represent the customer perspective as well as identify and write internal and external technical collateral

  • Establish, track, monitor and report on actionable metrics and KPIs for product adoption 

  • Represent Couchbase at conferences, industry, and sales events

Qualifications

  • 7+ years of experience serving in the capacity of a pre-sales engineer

  • Ability to teach other members of the team and effectively manage a team of highly skilled Sales Engineers 

  • Experience with traditional RDBMS including schema modeling, performance tuning and configuration

  • Proven ability to provide technical leadership to the account team and engineers

  • Hands-on administration and troubleshooting experience with x86 operating systems (Linux, Windows, Mac OS), networking and storage architectures

  • Familiarity with NoSQL databases or other distributed high-performance systems

  • Must be able to coordinate across various groups and functional teams

  • Ability to apply solutions, technology, and products to a business opportunity

  • Willingness to travel throughout the assigned region both by air and by car

Minimum Qualifications

  • Excellent communication and presentation skills with an ability to present technical solutions concisely to any audience

  • Experience engaging with developers and programming experience in at least one of the following: Java/.NET/PHP

  • Demonstrated passion for diving into technical issues and solving customer problems

  • Demonstrated critical thinking and advanced troubleshooting skills and qualities

  • Ability to travel a minimum of 25% of the time is required
About Couchbase

Couchbase's mission is to be the platform that accelerates application innovation. To make this possible, Couchbase created an enterprise-class, multi-cloud NoSQL database architected on top of an open source foundation. Couchbase is the only database that combines the best of NoSQL with the power and familiarity of SQL, all in a single, elegant platform spanning from any cloud to the edge.  
 
Couchbase has become pervasive in our everyday lives; our customers include industry leaders Amadeus, AT&T, BD (Becton, Dickinson and Company), Carrefour, Comcast, Disney, DreamWorks Animation, eBay, Marriott, Neiman Marcus, Tesco, Tommy Hilfiger, United, Verizon, Wells Fargo, as well as hundreds of other household names.

Couchbase’s HQ is conveniently located in Santa Clara, CA with additional offices throughout the globe. We’re committed to a work environment where you can be happy and thrive, in and out of the office.

At Couchbase, you’ll get:
* A fantastic culture
* A focused, energetic team with aligned goals
* True collaboration with everyone playing their positions
* Great market opportunity and growth potential
* Time off when you need it.
* Regular team lunches and fully-stocked kitchens.
* Open, collaborative spaces.
* Competitive benefits and pre-tax commuter perks

Whether you’re a new grad or a proven expert, you’ll have the opportunity to learn new skills, grow your career, and work with the smartest, most passionate people in the industry.

Revolutionizing an industry requires a top-notch team. Become a part of ours today. Bring your big ideas and we'll take on the next great challenge together.

Want to learn more? Check out our blog: https://blog.couchbase.com/

Couchbase is proud to be an equal opportunity workplace. Individuals seeking employment at Couchbase are considered without regard to age, ancestry, color, gender (including pregnancy, childbirth, or related medical conditions), gender identity or expression, genetic information, marital status, medical condition, mental or physical disability, national origin, protected family care or medical leave status, race, religion (including beliefs and practices or the absence thereof), sexual orientation, military or veteran status, or any other characteristic protected by federal, state, or local laws.
Share this job:
REMOTE Senior Big Data Engineers
Surge  
aws python big data senior Dec 08 2019

SURGE is looking for smart, self-motivated, experienced senior engineers who enjoy the freedom of telecommuting and flexible schedules, to work as long-term, consistent (40 hrs/week) independent contractors on a variety of software development projects.

Senior Big Data Engineers, Hadoop, AWS, Python

Must be located in the US or Canada to be considered for this role. Sorry, No Visas.

For immediate consideration, email resume with tech stack under each job and include your full name, cell phone number, email address and start date to: jobs@surgeforward.com

Share this job:
Business Development Director - Head of Americas Partners
executive saas big data cloud Dec 05 2019
Couchbase is looking for an experienced Business Development/Partner Executive to successfully recruit and manage new partners in North America and South America. 

Couchbase is building out our Business Development Team and has an exciting role for someone to build relationships with Route to Market Partners in the Americas. Successful candidates will have experience working in a company selling applications, middleware, database, data warehouse, data integration technology or big/fast data technologies. Experience with open source and SaaS or enterprise subscription software is also a key requirement. In addition, the ideal candidate will be passionate about recruiting and managing Partners to drive new revenue streams. You will have a demonstrated ability to think strategically and will help define and build the partner model at Couchbase and most importantly, be partner sales focused.

The Director, Head of Americas Partners needs to be adept at working with multiple organizations in the company to accomplish these activities.

In collaboration with Sales, Sales Enablement and Marketing, your responsibilities will include:

  • Drive the route to market partner strategy for Couchbase.
  • Identify the most important Partners in the Americas Region, with a focus on the United States.
  • Lead the expansion of the partner program and prioritization.
  • Establish contractual relationships with these partners.
  • Enable the partners to deliver Couchbase products and services to Enterprise Customers.
  • Develop joint marketing activities (such as webinars, conferences, meetups, lunch-and-learns) with partners to build pipeline.
  • Enable joint sales activities at the field level.
  • Work with the field sales team to close partner sourced and influenced deals.
  • Manage the partner relationships across all these activities.

Desired Skills and Experience:

  • 10+ years of business development (partner) experience
  • Experience recruiting and building out new partner channels in a growth stage private company
  • Experience working in a company selling applications, middleware, database, data warehouse, data integration technology or big/fast data technologies. Experience with open source and enterprise subscription software is also highly desirable.
  • Experience with Global Systems Integrators, ISVs, Regional System Integrators, VARs/Resellers
  • Enterprise software sales experience
  • Excellent writing and presentation skills
  • Strong project management skills with a focus on building new relationships. Ability to think strategically, develop tactics and execute
  • Ability to influence and identify champions, both internally and externally
About Couchbase

Couchbase's mission is to be the platform that accelerates application innovation. To make this possible, Couchbase created an enterprise-class, multi-cloud NoSQL database architected on top of an open source foundation. Couchbase is the only database that combines the best of NoSQL with the power and familiarity of SQL, all in a single, elegant platform spanning from any cloud to the edge.  
 
Couchbase has become pervasive in our everyday lives; our customers include industry leaders Amadeus, AT&T, BD (Becton, Dickinson and Company), Carrefour, Comcast, Disney, DreamWorks Animation, eBay, Marriott, Neiman Marcus, Tesco, Tommy Hilfiger, United, Verizon, Wells Fargo, as well as hundreds of other household names.

Couchbase’s HQ is conveniently located in Santa Clara, CA with additional offices throughout the globe. We’re committed to a work environment where you can be happy and thrive, in and out of the office.

At Couchbase, you’ll get:
* A fantastic culture
* A focused, energetic team with aligned goals
* True collaboration with everyone playing their positions
* Great market opportunity and growth potential
* Time off when you need it.
* Regular team lunches and fully-stocked kitchens.
* Open, collaborative spaces.
* Competitive benefits and pre-tax commuter perks

Whether you’re a new grad or a proven expert, you’ll have the opportunity to learn new skills, grow your career, and work with the smartest, most passionate people in the industry.

Revolutionizing an industry requires a top-notch team. Become a part of ours today. Bring your big ideas and we'll take on the next great challenge together.

Want to learn more? Check out our blog: https://blog.couchbase.com/

Couchbase is proud to be an equal opportunity workplace. Individuals seeking employment at Couchbase are considered without regard to age, ancestry, color, gender (including pregnancy, childbirth, or related medical conditions), gender identity or expression, genetic information, marital status, medical condition, mental or physical disability, national origin, protected family care or medical leave status, race, religion (including beliefs and practices or the absence thereof), sexual orientation, military or veteran status, or any other characteristic protected by federal, state, or local laws.
Share this job:
Software Engineer - .NET Platform Developer
Percona  
dot net java python scala php big data Dec 02 2019
If you like working with the developer community for an Engagement Database and being on the front lines of integrating our product into various technology stacks, this is for you. This is your chance to disrupt a multi-billion-dollar industry, change how the world accesses information, and reinvent the way businesses deliver amazing customer experiences. As a Software Engineer on the SDK and Connector engineering team, you’ll work on the developer interface to Couchbase Server for JVM platform languages, including the Java SDK and future platforms like Scala and Kotlin, and contribute to connectors and frameworks such as Apache Spark and Spring Data. In your daily work, you will help the developer community innovate on top of our Engagement Database. You will have one of those rare positions of working with a market-leading product and an open source community of users and contributors. The skill set and expectations are as follows.

Responsibilities

  • Take on key projects related to the development, enhancement and maintenance of Couchbase’s products built on the JVM platform core (core-io), including the Java SDK and new platforms we add. Contribute to other JVM-related projects such as the Kotlin client, the Spring Data connector, and others.
  • Contribute to the creation, enhancement and maintenance of documentation and samples that demonstrate how Java based languages and platforms work with Couchbase.
  • Create, enhance and maintain various documentation artifacts designed to make it easy for developers and system architects to quickly become productive with Couchbase.
  • Maintain, nurture, and enhance contributions to the Couchbase community and forums from the wider Couchbase community.
  • Work with the growing community of developers who will want to know how to build applications on Couchbase with Java, Kotlin, Spring, .NET, Node.js, PHP, Python, and higher-level frameworks.

Qualifications

  • The right person for this role will be a self-motivated, independent, and highly productive individual, with the ability to learn new technologies and become proficient quickly.
  • Must have a minimum of 5 years of software development experience in a professional software development organization, ideally working on platform-level software.
  • Should be familiar with modern, reactive, asynchronous software development paradigms such as Reactor and Reactive Streams.
  • Should have experience with binary streaming wire protocols, such as those in Couchbase. Experience with data formats such as Apache Avro and streaming platforms such as Apache Kafka would be a plus.
  • Should have familiarity with web application development beyond Spring Framework, such as in Play Framework or others. The ideal candidate would have familiarity with web application or mobile integration development in at least one other platform such as .NET or Java.
  • Must be familiar with consuming and producing RESTful interfaces (a small consumption sketch follows this list). May be familiar with GraphQL interfaces as well.
  • Would ideally be able to demonstrate experience in large-scale, distributed systems and understand the techniques involved in making these systems scale and perform.
  • Has the ability to work in a fast-paced environment and to be an outstanding team player.
  • Familiarity with distributed networked server systems that run cross-platform on Linux and Windows is highly desired.
  • Experience with the git SCM and tools such as the Atlassian suite, JIRA, and Jenkins CI is also strongly desired.
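
As referenced in the REST item above, a minimal sketch of consuming a RESTful JSON interface with Python's requests library; the endpoint and field names are placeholders:

    import requests

    resp = requests.get("https://api.example.com/v1/buckets", timeout=10)
    resp.raise_for_status()              # surface HTTP errors early
    for bucket in resp.json():           # assumes the body is a JSON array
        print(bucket["name"])
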
About Couchbase

Couchbase's mission is to be the platform that accelerates application innovation. To make this possible, Couchbase created an enterprise-class, multi-cloud NoSQL database architected on top of an open source foundation. Couchbase is the only database that combines the best of NoSQL with the power and familiarity of SQL, all in a single, elegant platform spanning from any cloud to the edge.  
 
Couchbase has become pervasive in our everyday lives; our customers include industry leaders Amadeus, AT&T, BD (Becton, Dickinson and Company), Carrefour, Comcast, Disney, DreamWorks Animation, eBay, Marriott, Neiman Marcus, Tesco, Tommy Hilfiger, United, Verizon, Wells Fargo, as well as hundreds of other household names.

Couchbase’s HQ is conveniently located in Santa Clara, CA with additional offices throughout the globe. We’re committed to a work environment where you can be happy and thrive, in and out of the office.

At Couchbase, you’ll get:
* A fantastic culture
* A focused, energetic team with aligned goals
* True collaboration with everyone playing their positions
* Great market opportunity and growth potential
* Time off when you need it.
* Regular team lunches and fully-stocked kitchens.
* Open, collaborative spaces.
* Competitive benefits and pre-tax commuter perks

Whether you’re a new grad or a proven expert, you’ll have the opportunity to learn new skills, grow your career, and work with the smartest, most passionate people in the industry.

Revolutionizing an industry requires a top-notch team. Become a part of ours today. Bring your big ideas and we'll take on the next great challenge together.

Want to learn more? Check out our blog: https://blog.couchbase.com/

Couchbase is proud to be an equal opportunity workplace. Individuals seeking employment at Couchbase are considered without regard to age, ancestry, color, gender (including pregnancy, childbirth, or related medical conditions), gender identity or expression, genetic information, marital status, medical condition, mental or physical disability, national origin, protected family care or medical leave status, race, religion (including beliefs and practices or the absence thereof), sexual orientation, military or veteran status, or any other characteristic protected by federal, state, or local laws.
Share this job:
Senior Devops Engineer
python ruby docker aws senior devops Nov 13 2019

Senior DevOps Engineer (Contract)

New Context is a rapidly growing consulting company in the heart of downtown San Francisco. We specialize in Lean Security: an approach that leads organizations to build better, safer software through hands-on technical and management consulting. We are a group of engineers who live and breathe Agile Infrastructure, Systems Automation, Cloud Orchestration, and Information & Application Security.

As a New Context Senior DevOps Engineer, you will provide technical leadership with a hands-on approach. Our clients look to us to guide them to a solution that makes sense for them, and you should expect to provide thought leadership, design, and implement that solution.

Expect to heavily use Open Source software to take on challenges like delivery of highly secured containers, management of IoT devices, or building Big Data ecosystems at petabyte scale and beyond. You will utilize our core methodologies (Agile, Lean, TDD, and Pair Programming), along with your fluency in DevOps, to implement robust and reliable systems for our clients.

You will work with our clients and other New Context team members while working from the New Context office, at client sites, or from your home. We foster a tight-knit, highly-supportive environment where there are no stupid questions. Even if you may not know the answer immediately, you'll have the entire company supporting you via Slack, Zoom, or in-person. We also host a daily, all-company stand-up via Zoom, and a weekly company Retro, so you won't just be a name on an email.

At New Context, our core values are Humility, Integrity, Quality & Passion! Our employees live these values every single day.

Who you are:

  • A seasoned technologist with 5+ years of work experience in a DevOps, SRE, or Continuous Integration role;
  • Experienced in Open Source web technologies, especially in the areas of highly-available, secure systems;
  • Accustomed to implementing cloud-based solutions (AWS, Google Cloud, Azure) with significant work experience in public cloud technologies;
  • Have developed production-quality applications in an Agile environment;
  • Fluent in one or more high-level languages, ideally Ruby and/or Python;
  • Familiar with Infrastructure as Code (IaC) and automated server provisioning technologies (see the sketch after this list);
  • Experienced as a technical lead on technical projects;
  • An excellent communicator, experienced working with external clients and customers and able to communicate productively with customers to explain technical aspects and project status;
  • Able to think on your feet and learn quickly on-the-job in order to meet the expectations of our clients;
  • A great teammate and a creative and independent thinker.
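
As a small, hedged sketch of the automated-provisioning item above, using Python and boto3 rather than a dedicated IaC tool (Terraform and friends appear in the technology list below); the region, AMI ID, and tags are placeholders:

    import boto3

    ec2 = boto3.client("ec2", region_name="us-west-2")
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "provisioned-by", "Value": "automation"}],
        }],
    )
    print(resp["Instances"][0]["InstanceId"])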

Bonus points if you are:

  • Comfortable as a technically hands-on Project Manager;
  • Experienced managing teams;
  • Happy and effective in a consulting role;
  • Familiar with: TCP/IP, firewall policy design, social engineering, intrusion detection, code auditing, forensic analysis;
  • A believer in automated tests and their role in software engineering;
  • Able to translate complex concepts to business customers

Technology we use:

We tailor solutions to our customers. You might work on projects using any of the following technologies:

  • Automation: Chef, Puppet, Docker, Ansible, Salt, Terraform, Automated Testing
  • Containerization Ecosystem: Docker, Mesosphere, Rancher, CoreOS, Kubernetes
  • Cloud & Virtualization: AWS, Google Compute Engine, OpenStack, Cloudstack, kvm, libvirt
  • Tools: Jenkins, Atlassian Suite, Pivotal Tracker, Vagrant, Git, Packer
  • Monitoring: SysDig, DataDog, AppDynamics, New Relic, Sentry, Nagios, Prometheus
  • Databases/Datastores: Cassandra, Hadoop, Redis, Postgres, MySQL
  • Security: Compliance standards, Application Security, Firewalls, OSSEC, Hashicorp Vault
  • Languages: Ruby, Python, Go, JavaScript

All applicants must be authorized to work in the U.S. We will not sponsor visas for this position.

We are committed to equal-employment principles, and we recognize the value of committed employees who feel they are being treated in an equitable and professional manner. We are passionate about finding ways to attract, develop and retain the talent and unique viewpoints needed to meet business objectives, and to recruit and employ highly qualified individuals representing the diverse communities in which we live, because we believe that this diversity results in conversations which stimulate new and innovative ideas.

Employment policies and decisions on employment and promotion are based on merit, qualifications, performance, and business needs. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

Share this job:
DevOps Engineer
docker devops big data cloud backend design Nov 09 2019

We need a DevOps Engineer to help us build and maintain a data analytics platform for our end client in the cyber security space.

You will need the following:

  • Strong programming skills, with expertise in multiple implementation languages/frameworks (including some subset of Google Cloud Platform, Terraform, and Docker), along with a delivery background and backend implementation experience
  • Familiarity with large-scale, big data, and streaming data technologies, as well as exposure to a variety of structured (Postgres, MySQL) and unstructured data sources (Elastic, Kafka, and the Hadoop ecosystem) as implemented at Internet-scale.
  • Experience writing and optimizing streaming and batch analytics.
  • Experience with Agile frameworks, secure software design, test-driven development, and modern, container-delivered code deployment in a cloud-based DevOps environment.

The ideal candidate will be based either in the US or in Tel Aviv, Israel. This is a full-time opportunity, initially on a 3-month contract.

What we look for is someone with a great attitude and experience turning ideas into fully-fledged products. If this profile sounds like you, send through your application!

Share this job:
Data Engineer-Remote
python scala big data aws design healthcare Nov 08 2019

Description

SemanticBits is looking for a talented Data Engineer who is eager to apply computer science, software engineering, databases, and distributed/parallel processing frameworks to prepare big data for the use of data analysts and data scientists. You will deliver data acquisition, transformation, cleansing, conversion, compression, and loading of data into data and analytics models. You will work in partnership with data scientists and analysts to understand use cases, data needs, and outcome objectives. You are a practitioner of advanced data modeling and optimization of data and analytics solutions at scale; an expert in data management, data access (big data, data marts, etc.), programming, and data modeling; and familiar with analytic algorithms and applications (like machine learning).
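
As a minimal, hypothetical example of the acquisition-cleansing-conversion-loading flow described above, here is a pandas-based sketch; the file and column names are invented:

    import pandas as pd

    df = pd.read_csv("claims_raw.csv")                     # acquisition
    df = df.dropna(subset=["claim_id"])                    # cleansing
    df["amount"] = df["amount"].astype("float64")          # conversion
    df.to_parquet("claims.parquet", compression="snappy")  # compression + load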

SemanticBits is a leading company specializing in the design and development of digital health services, and the work we do is just as unique as the culture we’ve created. We develop cutting-edge solutions to complex problems for commercial, academic, and government organizations. The systems we develop are used in finding cures for deadly diseases, improving the quality of healthcare delivered to millions of people, and revolutionizing the healthcare industry on a nationwide scale. There is a meaningful connection between our work and the real people who benefit from it; and, as such, we create an environment in which new ideas and innovative strategies are encouraged. We are an established company with the mindset of a startup and we feel confident that we offer an employment experience unlike any other and that we set our employees up for professional success every day.

Requirements

  • Bachelor’s degree in computer science (or related) and two to four years of professional experience
  • Strong knowledge of computer science fundamentals: object-oriented design and programming, data structures, algorithms, databases (SQL and relational design), networking
  • Demonstrable experience engineering scalable data processing pipelines.
  • Demonstrable expertise with Python, Scala, Spark, and wrangling of various data formats - Parquet, CSV, XML, JSON.
  • Experience with the following technologies is highly desirable: Redshift (w/Spectrum), Hadoop, Apache NiFi, Airflow, Apache Kafka, Apache Superset, Flask, Node.js, Express, AWS EMR, Tableau, Looker, Dremio
  • Experience with Agile methodology, using test-driven development.
  • Excellent command of written and spoken English
  • Self-driven problem solver

Benefits

  • Generous base salary
  • Three weeks of PTO
  • Excellent health benefits program (Medical, dental and vision)
  • Education and conference reimbursement
  • 401k retirement plan. We contribute 3% of base salary irrespective of employee's contribution
  • 100% paid short-term and long-term disability
  • 100% paid life insurance
  • Flexible Spending Account (FSA)
  • Casual working environment
  • Flexible working hours

SemanticBits, LLC is an equal opportunity, affirmative action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability, or any other characteristic protected by law. We are also a veteran-friendly employer.

Share this job:
Solutions Architect
phData  
scala java big data cloud aws testing Nov 05 2019

If you're inspired by innovation, hard work and a passion for data, this may be the ideal opportunity to leverage your background in Big Data and Software Engineering, Data Engineering, or Data Analytics to design, develop, and deliver innovative big data solutions for a diverse set of global and enterprise clients.

At phData, our proven success has skyrocketed the demand for our services, resulting in quality growth at our company headquarters conveniently located in Downtown Minneapolis and expanding throughout the US. Notably we've also been voted Best Company to Work For in Minneapolis for three (3) consecutive years.   

As the world’s largest pure-play Big Data services firm, our team includes Apache committers, Spark experts and the most knowledgeable Scala development team in the industry. phData has earned the trust of customers by demonstrating our mastery of Hadoop services and our commitment to excellence.

In addition to a phenomenal growth and learning opportunity, we offer competitive compensation and excellent perks including base salary, annual bonus, extensive training, paid Cloudera certifications, generous PTO, and a long-term incentive plan for employees.

As a Solution Architect on our Big Data Consulting Team, your responsibilities will include:

  • Design, develop, and deliver innovative Hadoop solutions; partner with our internal Infrastructure Architects and Data Engineers to build creative solutions to tough big data problems.
  • Determine the technical project roadmap, select the best tools, assign tasks and priorities, and assume general project management oversight for performance, data integration, ecosystem integration, and security of big data solutions. Mentor and coach Developers and Data Engineers, providing guidance with project creation, application structure, automation, code style, testing, and code reviews.
  • Work across a broad range of technologies – from infrastructure to applications – to ensure the ideal Hadoop solution is implemented and optimized.
  • Integrate data from a variety of data sources (data warehouse, data marts) utilizing on-prem or cloud-based data structures (AWS); determine new and existing data sources.
  • Design and implement streaming, data lake, and analytics big data solutions.
  • Create and direct testing strategies including unit, integration, and full end-to-end tests of data pipelines.
  • Select the right storage solution for a project, comparing Kudu, HBase, HDFS, and relational databases based on their strengths.
  • Utilize ETL processes to build data repositories; integrate data into the Hadoop data lake using Sqoop (batch ingest), Kafka (streaming), Spark, and Hive or Impala (transformation) – see the sketch after this list.
  • Partner with our Managed Services team to design and install on-prem or cloud-based infrastructure including networking, virtual machines, containers, and software.
  • Determine and select the best tools to ensure optimized data performance; perform data analysis utilizing Spark, Hive, and Impala.
  • Local candidates work between client sites and our Minneapolis office; remote US candidates must be willing to travel 20% for training and project kick-offs.
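
As a rough illustration of that ingestion pattern, here is a minimal PySpark sketch; the Hive table, Kafka broker, topic, and schema are hypothetical, and the batch data is assumed to have already been landed (e.g. by Sqoop):

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, from_json
    from pyspark.sql.types import StructType, StructField, StringType, TimestampType

    spark = (SparkSession.builder
             .appName("lake-ingest")
             .enableHiveSupport()
             .getOrCreate())

    # Batch layer: transform a Hive table previously loaded via Sqoop.
    orders = spark.table("raw.orders")                     # hypothetical table
    daily = orders.groupBy("order_date").count()
    daily.write.mode("overwrite").saveAsTable("curated.orders_daily")

    # Streaming layer: ingest a Kafka topic into the data lake as Parquet.
    schema = StructType([StructField("event_id", StringType()),
                         StructField("ts", TimestampType())])
    events = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
              .option("subscribe", "events")                     # hypothetical topic
              .load()
              .select(from_json(col("value").cast("string"), schema).alias("e"))
              .select("e.*"))

    (events.writeStream
     .format("parquet")
     .option("path", "hdfs:///lake/events")
     .option("checkpointLocation", "hdfs:///lake/_checkpoints/events")
     .start())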

Technical Leadership Qualifications


  • 5+ years of previous experience as a Software Engineer, Data Engineer, or Data Analyst
  • Expertise in core Hadoop technologies including HDFS, Hive, and YARN
  • Deep experience in one or more ecosystem products/languages such as HBase, Spark, Impala, Solr, Kudu, etc.
  • Expert programming experience in Java, Scala, or another statically typed programming language
  • Ability to learn new technologies in a quickly changing field
  • Strong working knowledge of SQL and the ability to write, debug, and optimize distributed SQL queries
  • Excellent communication skills including proven experience working with key stakeholders and customers

Leadership


  • Ability to translate “big picture” business requirements and use cases into a Hadoop solution, including ingestion of many data sources, ETL processing, data access and consumption, and custom analytics
  • Experience scoping activities on large-scale, complex technology infrastructure projects
  • Customer relationship management, including project escalations and participation in executive steering meetings
  • Coaching and mentoring data or software engineers
Share this job:
Senior Software Engineer
aws devops java python javascript machine learning Nov 05 2019

DESCRIPTION:
Authority Partners is hiring an experienced, passionate and self-driven Senior Software Engineer/Data Engineer to join our strong development teams. Don't miss this call and the chance to join a team of top-notch players working with the most modern technologies. You will take on complex problems in a big data world and make sense of them through advanced data engineering and rendering tools, undertaking the full software lifecycle of design, implementation, and integration. Further, you will use leading-edge cloud computing technology and leverage Amazon Web Services to build AI infrastructure and redefine data interaction. If we've sparked your interest and you are up for the challenge, read on and apply!

RESPONSIBILITIES:

  • Design and develop an SDK framework to integrate our AI product into the flow of work
  • Develop, improve, and maintain the API and SDK to support access across any system (see the sketch after this list)
  • Produce unit, functional, integration and interoperability tests, including automating tests when possible
  • Collaborate with the product team to translate requirements into future product development
  • Work extensively with APIs
  • Leverage machine learning techniques to build systems which process and derive insights from billions of data points every day
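
To give a flavor of the SDK/API work above, here is a minimal sketch of a thin Python client over a REST API; the endpoint, route, and fields are hypothetical, not Authority Partners' actual product:

    import requests

    class InsightsClient:
        """Thin SDK wrapper over a hypothetical REST API."""

        def __init__(self, base_url, api_key, timeout=10.0):
            self.base_url = base_url.rstrip("/")
            self.timeout = timeout
            self.session = requests.Session()
            self.session.headers["Authorization"] = f"Bearer {api_key}"

        def get_insight(self, insight_id):
            # Raise on HTTP errors so callers see failures immediately.
            resp = self.session.get(f"{self.base_url}/insights/{insight_id}",
                                    timeout=self.timeout)
            resp.raise_for_status()
            return resp.json()

    # Usage (hypothetical endpoint and key):
    # client = InsightsClient("https://api.example.com/v1", api_key="...")
    # print(client.get_insight("abc123"))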

REQUIREMENTS:

  • 5+ years of proven work experience in software development
  • Strong knowledge of JavaScript and at least one UI library/framework (e.g. React, Angular)
  • Minimum two years of experience with Amazon Web Services (Lambda, EC2, RDS, Elastic Beanstalk, S3, etc.), DevOps and CI/CD
  • Working knowledge of Python, Java, and/or Scala
  • Understanding of the technology and approaches for knowledge representation and semantic reasoning, e.g., semantic web technologies, graph databases, or deep relational data modeling
  • Knowledge of backend coding, API development, and database technologies
  • Understanding of data flows, data architecture, ETL and processing of structured and unstructured data
  • Experience with distributed software suites such as Apache Hadoop, Spark, Spark Streaming, Kafka, Storm, Zookeeper, Flume, Presto, Pig, Hive, MapReduce
  • Experience with agile (e.g., Scrum) or lean (e.g., Kanban) methodologies and practices
  • Minimum five years of experience building production quality cloud products
  • Proven leadership skills, including mentoring, coaching, and collaboration; able to inspire and mentor junior and senior team members
  • Ability to design, architect and quickly complete projects with minimal supervision and direction
  • You have a passion for keeping up with the fast-emerging big data analytics technical landscape.
  • Experience developing and managing RESTful API applications with demonstrable production-scale experience
  • Experience developing cross-platform technologies and packaging as an SDK/library
  • Good understanding of system architecture and design and experience with large distributed systems
  • Demonstrated delivery of large-scale, initially-ambiguous projects
  • Expert knowledge in Machine Learning (Natural Language Processing, Vision, Classification, Search)
  • Knowledge of the software architecture and designing of systems at the enterprise level

EDUCATION AND EXPERIENCE:
Bachelor’s, Master’s or Ph.D. in Computer Science, Engineering, Mathematics or Physics, or equivalent industry experience

Share this job:
Full Stack Engineer
php node-js python full stack machine learning big data Oct 31 2019

We are looking for a Full Stack Engineer with a “take no prisoners” attitude to join our team. Why is Engineering at Givelify different? Moonshots are our norm. Our product impacts real people on the ground. We build with passion, hold a high standard of engineering quality, and solve unique scalability challenges. You will have the ability to touch all aspects of Engineering and Product Development.

Key Responsibilities:

  • Drive engineering efforts in building out new features that directly impact the relationship between causes and their supporters.
  • Engineer highly available, scalable software and data architectures.
  • Build out real-time analytics and reporting dashboards that are optimized for big data and synced in real time across multiple clients.
  • Work with a small team of experienced and highly talented engineers in a CI/CD environment with cutting-edge technologies.
  • Work with our front-end team to fine tune our REST APIs.
  • Work with our DevOps team to ensure a scalable, secure, redundant, distributed production environment system.
  • Implement mathematical and machine learning algorithms under guidance from Givelify’s data scientists.

Our evolving stack:

  • Development: PHP (Yii), Node.js, Angular, Python, Spark
  • CI/CD: Jenkins, Github, Jira, Selenium, PHPUnit

Ideal Qualifications:

  • Experience in building large scale distributed web applications.
  • Excellent object-oriented development skills.
  • Deep understanding of graph design principles, with a strong background in full-stack development environments.
  • Optimization of databases for big data environment, structuring of queries for fast responses.
  • Past experience in significant cross-functional engineering efforts.
  • Experience with cloud computing platforms (e.g. Amazon AWS, Microsoft Azure, Google App Engine, etc.).
  • Deep understanding of big O notation and algorithm complexity analysis (see the sketch after this list).
  • Excellent communication and interpersonal skills.
  • Experience leading engineering/development teams.
  • A Bachelor’s with 5+ years of experience, or an M.S./Ph.D. with relevant academic/research experience, in Computer Science, Computer Engineering, Mathematics, Physics, or an equivalent degree.
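
On the big-O point above, a quick generic illustration (not Givelify code) of why complexity analysis matters: detecting duplicates by comparing every pair is O(n^2), while a set-based pass is O(n):

    def has_duplicates_quadratic(items):
        # Compares every pair: O(n^2) time; fine only for tiny inputs.
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                if items[i] == items[j]:
                    return True
        return False

    def has_duplicates_linear(items):
        # Single pass with a set: O(n) time, O(n) extra space.
        seen = set()
        for item in items:
            if item in seen:
                return True
            seen.add(item)
        return False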
Share this job:
REMOTE Sr. Scala Engineer - Redis and Postgres REQUIRED, Sorry, No Visas
Surge  
scala redis postgresql big data cloud aws Oct 31 2019

Surge Forward is looking for smart, self-motivated, experienced, senior-level consultants who enjoy the freedom of telecommuting and flexible schedules, to work as long-term, consistent (40 hrs/week) independent contractors on a variety of software development projects.

TECHNICAL REQUIREMENTS:

EST Hours Required. Must live in the US or Canada to be considered. Sorry, NO Visas.

• Proficiency with Scala
• Proficiency with PostgreSQL and Redis (see the sketch after this list)
• Experience with data processing formats like JSON and Parquet
• Familiarity with SBT and Docker
• Knowledge of or experience with RESTful APIs
• Proficiency in source code management, software development, Unix tools and the terminal
• Foundational knowledge of algorithms, networking, concurrency, file systems
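
As a rough sketch of the PostgreSQL-plus-Redis pattern this role involves (shown in Python for brevity, though the role itself is Scala; the connection details, table, and TTL are hypothetical):

    import json

    import psycopg2
    import redis

    cache = redis.Redis(host="localhost", port=6379, db=0)       # hypothetical
    pg = psycopg2.connect("dbname=app user=app host=localhost")  # hypothetical

    def get_user(user_id):
        """Read-through cache: try Redis first, fall back to Postgres."""
        key = f"user:{user_id}"
        cached = cache.get(key)
        if cached is not None:
            return json.loads(cached)

        with pg.cursor() as cur:
            cur.execute("SELECT id, name FROM users WHERE id = %s", (user_id,))
            row = cur.fetchone()

        result = {"id": row[0], "name": row[1]} if row else {}
        cache.setex(key, 300, json.dumps(result))  # cache for five minutes
        return result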

NICE TO HAVE:
• Experience with software design patterns
• Understanding of Kubernetes
• Experience with business intelligence and digital analytics

QUALIFICATIONS:
• Experience and capability to design, specify and build a data API
• Solid understanding of large system design, streaming big data and performance trade-offs
• Deep experience with SQL and multiple Relational Database Management Systems with non-trivial databases.
• Demonstrated expertise in AWS cloud computing, and understanding of data management best practices
• Strong communication, verbal and written skills
• Keen to take leadership within your domains of expertise
• Solution-oriented, able to implement prototypes with new tools quickly
• 3-5+ years of experience in software engineering and data analytics
• Bachelor’s/Master’s degree in computer science, or a related field

RESPONSIBILITIES:
• Develop and maintain software components to enhance, transform and model raw data
• Develop and maintain a highly available and scalable real-time API
• Investigate and resolve performance bottlenecks, data quality issues and automation failures
• Implement and manage continuous delivery systems and methodologies
• Implement tests and validation processes to maintain the code and data quality


Only candidates located in the immediate area can be considered at this time. Sorry, No Visas.

Resume must include the tech stack under each job directly on the resume in order to be considered.

For immediate consideration, email resume and include your cell phone number and start date: jobs@surgeforward.com

Share this job:
Solutions Architect to work with Apache Flink
java big data design Oct 29 2019

Ververica is currently building a new team of Solution Architects in the US. You’ll be part of a new and fast-growing team helping customers have a great experience using our products and Apache Flink. The role will sit at the forefront of one of the most significant paradigm shifts in information processing and real-time architectures in recent history - stream processing - which sets the foundation to transform companies and industries for the on-demand services era.

You will work with engineering teams inside of our customers to build the best possible stream processing architecture for their use cases. This includes reviewing their architecture, giving guidance on how they design their Flink applications, and helping them take their first steps with our products.

Some of the customer engagements will be carried out remotely via phone and screen share, but the position also includes traveling to customers to help them onsite.

And when you’re not working with our customers, there are plenty of opportunities at Ververica to learn more about Flink, contribute to our products and open source projects, and help evangelize Apache Flink to users around the world.

What you’ll do all day:

  • Use your experience to solve challenging data engineering and stream processing problems for our customers
  • Meet with customers, understand their requirements, and help guide them towards best-of-breed architectures
  • Provide guidance and coding assistance during the implementation phase and make sure projects end in successful production deployments
  • Become an Apache Flink and stream processing expert

You will love this job if you …

  • ... are experienced in building and operating solutions using distributed data processing systems in large-scale production environments (e.g. Hadoop, Kafka, Flink, Spark)
  • … are fluent in Java and/or Scala
  • … love to spend the whole day talking about Big Data technologies
  • … have great English skills and like talking to customers
  • … like traveling and visiting new places

What we offer:

  • Competitive salary 
  • Tech gear of your choice
  • International team environment (10 nationalities so far)
  • Flexible working arrangements (home office, flexible working hours)
Share this job:
Data Engineer
node-js python aws big data cloud senior Oct 27 2019

The role:

Do you thrive on simplifying complicated systems and creating structure? We're looking for a data engineer with a passion for designing data platforms and solutions for businesses to access the information they need easily.

You'll have a proven track record in the architecture and optimisation of data systems and in building data pipelines from the bottom up. You must be able to select appropriate technology to match the needs of the business, while taking into account their current applications, standards and assets.

As a senior programmer, you'll have a methodical approach to streamlining data sources and be able to communicate effectively across teams to solve their challenges. In return, we'll offer a competitive rate and some amazing perks, plus we're a fun team to work with, we promise.

Responsibilities:

  • Plan and implement an optimal data pipeline architecture
  • Make sense of complex data sets to fulfil business requirements
  • Identify and implement process improvements, automate manual processes, optimise data delivery, and design the infrastructure for scalability
  • Enable optimal extraction, transformation, and loading of data from a range of data sources using SQL and big data technologies (see the sketch after this list)
  • Build analytics tools from the data pipeline to provide actionable business performance metrics such as for customer acquisition and operational efficiency
  • Keep data secure
  • Strive for innovative functionality of data systems by working with data and analytics experts
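
As a generic illustration of that extract-transform-load flow (the file, columns, and destination table are hypothetical; SQLite stands in for a real warehouse):

    import sqlite3

    import pandas as pd

    # Extract: pull raw data from a CSV export.
    raw = pd.read_csv("exports/orders.csv", parse_dates=["created_at"])

    # Transform: drop incomplete rows and derive a business metric.
    raw = raw.dropna(subset=["customer_id"])
    daily_revenue = raw.groupby(raw["created_at"].dt.date)["amount"].sum().reset_index()
    daily_revenue.columns = ["day", "revenue"]

    # Load: write the curated table into the warehouse.
    with sqlite3.connect("warehouse.db") as conn:
        daily_revenue.to_sql("daily_revenue", conn, if_exists="replace", index=False)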

Skills & Competencies:

The successful candidate will have:

  • Advanced working knowledge of SQL and experience with relational databases and query authoring
  • Programming experience in languages including Python, and possibly Node.js
  • Working knowledge of a variety of databases (desirable)
  • Experience designing, building and optimizing 'big data' data pipelines, architectures and data sets
  • Strong analytic skills related to working with unstructured datasets
  • Experience building processes supporting data transformation, data structures, metadata, dependency and workload management
  • A successful history of manipulating, processing and extracting value from large disconnected datasets
  • Strong project management and organisational skills
  • 5+ years of experience in a Data Engineer role, plus a graduate degree in Computer Science, Statistics, Informatics, Information Systems or another similar field

Desirable experience:

  • Google Cloud Platform (GCP), AWS, Apache Spark, Hadoop
  • DevOps, CI/CD and infrastructure as code
  • Open source data visualisation platforms such as Plotly, Tableau, Qlikview, Fusioncharts, Sisense, Chartio or similar
Share this job:
Data Engineer
java python aws php data science big data Oct 24 2019

This position can be remote, but US based candidates only.

About Us:

Dealer Inspire (DI) is a leading disruptor in the automotive industry through our innovative culture, legendary service, and kick-ass website, technology, and marketing solutions. Our mission is to future-proof local dealerships by building the essential, mobile-first platform that makes automotive retail faster, easier, and smarter for both shoppers and dealers. Headquartered in Naperville, IL, our team of nearly 600 work friends are spread across the United States and Canada, pushing the boundaries and getting **** done every day, together.

DI offers an inclusive environment that celebrates collaboration and thinking differently to solve the challenges our clients face. Our shared success continues to lead to rapid growth and positive change, which opens up opportunities to advance your career to the next level by working with passionate, creative people across skill sets. If you want to be challenged, learn every day, and work as a team with some of the best in the industry, we want to meet you. Apply today!

Want to learn more about who we are? Check us out here!

Job Description: 
Dealer Inspire is changing the way car dealerships do business through data. We are assembling a team of engineers and data scientists to help build the next generation distributed computing platform to support data driven analytics and predictive modeling.

We are looking for a Data Engineer to join the team and play a critical role in the design and implementation of sophisticated data pipelines and real-time analytics streams that serve as the foundation of our data science platform. Candidates should have the following qualifications:

Required Experience

  • 2-5 years experience as a data engineer in a professional setting
  • Knowledge of the ETL process and patterns of periodic and real time data pipelines
  • Experience with data types and data transfer between platforms
  • Proficiency with Python and related libraries to support the ETL process
  • Working knowledge of SQL
  • Experience with the Linux systems console (bash, etc.)
  • Knowledge of cloud based AWS resources such as EC2, S3, and RDS
  • Able to work closely with data scientists on the demand side
  • Able to work closely with domain experts and data source owners on the supply side
  • An ability to build a data pipeline monitoring system with robust, scalable dashboards and alerts for 24/7 operations (see the sketch after this list).
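
As a minimal sketch of such a monitoring check (the bucket, key, threshold, and alert hook are hypothetical):

    from datetime import datetime, timedelta, timezone

    import boto3

    s3 = boto3.client("s3")

    def is_fresh(bucket, key, max_age):
        """Return True if the latest pipeline output is recent enough."""
        head = s3.head_object(Bucket=bucket, Key=key)
        age = datetime.now(timezone.utc) - head["LastModified"]
        return age <= max_age

    def alert(message):
        # Stub: in production this might page via SNS, Slack, PagerDuty, etc.
        print(f"ALERT: {message}")

    if not is_fresh("example-pipeline-bucket", "latest/output.parquet",
                    max_age=timedelta(hours=1)):
        alert("Pipeline output is stale; investigate the ETL run.")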

Preferred Experience

  • College degree in a technical area (Computer Science, Information Technology, Mathematics or Statistics) 
  • Experience with Apache Kafka, Spark, Ignite and/or other big data tools 
  • Experience with JavaScript, Node.js, PHP and other web technologies.
  • Working knowledge of Java or Scala
  • Familiarity with tools such as Packer, Terraform, and CloudFormation 

What we are looking for in a candidate:

  • Experience with data engineering, Python and SQL
  • Willingness to learn new technologies and a whatever-it-takes attitude towards building the best possible data science platform
  • A person who loves data and all things data related, a.k.a. a self-described data geek
  • Enthusiasm and a “get it done” attitude!

Perks:

  • Health Insurance with BCBS, Delta Dental (Orthodontics coverage available), Eye Med Vision
  • 401k plan with company match
  • Tuition Reimbursement
  • 13 days paid time off, parental leave, and selected paid holidays
  • Life and Disability Insurance
  • Subsidized gym membership
  • Subsidized internet access for your home
  • Peer-to-Peer Bonus program
  • Work from home Fridays
  • Weekly in-office yoga classes
  • Fully stocked kitchen and refrigerator

*Not a complete, detailed list. Benefits have terms and requirements before employees are eligible.

Share this job:
Lead Software Engineer
scala aws python machine learning big data junior Oct 24 2019

X-Mode Social, Inc. is looking for a full-time lead software engineer to work on X-Mode's data platform and join our rapidly growing team. For this position, you can work either remotely anywhere in the U.S. or in our Reston, VA headquarters. Our technical staff is scattered across the U.S., so you'll need to be comfortable working remotely. We often use videoconferencing tools (like Slack and Google Meet) to coordinate, as well as Jira for tasking and Bitbucket for source control. We work in short sprints, and we'll count on you to provide estimates for tasks to be completed and delivered.

WHAT YOU'LL DO:

  • Use big data technologies, processing frameworks, and platforms to solve complex problems related to location
  • Build, improve, and maintain data pipelines that ingest billions of data points on a daily basis
  • Efficiently query data and provide data sets to help the Sales and Client Success teams with any data evaluation requests
  • Ensure high data quality through analysis, testing, and usage of machine learning algorithms

WHO YOU ARE:

  • 3-5+ years of Spark and Scala experience
  • Experience working with very large databases and batch processing datasets with hundreds of millions of records
  • Experience with Hadoop ecosystem, e.g. Spark, Hive, or Presto/Athena
  • Real-time streaming with Kinesis, Kafka or similar libraries
  • 4+ years working with SQL and relational databases
  • 3+ years working in Amazon Web Services (AWS)
  • A self-motivated learner who is willing to self-teach
  • Willing to mentor junior developers
  • Self-starter who can maintain a team-centered outlook
  • BONUS: Experience with Python, Machine Learning
  • BONUS: GIS/Geospatial tools/analysis and any past experience with geolocation data

WHAT WE OFFER:

  • Cool people, solving cool problems.
  • Competitive Salary
  • Medical, Dental and Vision
  • 15 Days of PTO (Paid Time Off)
  • We value your input. This is a chance to get in on the "ground floor" of a growing company
Share this job:
REMOTE Sr. Big Data/AWS Opening
Surge  
python aws big data Oct 22 2019

We are hiring smart, self-motivated, experienced, senior-level consultants who enjoy the freedom of telecommuting and flexible schedules, to work as long-term, consistent (40 hrs/week) independent contractors on a variety of software development projects.

REQUIRED:

Hiring Sr. Big Data/AWS Engineers and Architects.

Must be located in the US or Canada to be considered for this role. Sorry, No Visas.

RESUME MUST INCLUDE: the tech stack under each job, your first and last name, home location, and contact information.

For immediate consideration, email resume with tech stack under each job where you have worked and please include home location and cell phone number. 

Share this job:
Data Engineer
mysql big data cloud design Oct 20 2019

Who are we looking for:

We are looking for a savvy Data Engineer to join our growing tech team.

You will support our software developers, data analysts and data scientists on data initiatives, and will ensure that optimal data delivery architecture is consistent throughout ongoing projects.

The right candidate will be excited by the prospect of optimizing or even re-designing our company’s data architecture to support our next generation of products and data initiatives.

Roles and Responsibilities:

  • you will build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and other big data technologies
  • you will design and model data structures to help analyze our business and technical data
  • you will support existing processes running in production
  • you will work together with people from other key areas to assist with data-related technical issues and support their data infrastructure needs

Skills & Requirements

  • knowledge of relevant engineering best practices, data management fundamentals, and data storage principles, staying current with recent advances in distributed systems as they pertain to data storage and computing
  • 2+ years of experience in designing, building and maintaining data architecture(s) and infrastructure(s), both relational and non-relational
  • 2+ years of maintaining data warehouse systems and working on large scale data transformation using SQL, Hadoop, Hive, or other Big Data technologies; experience with ETL tools is a plus
  • 2+ years of data modeling experience, and able to use data models to improve the performance of software services
  • experience with cloud-based solutions (AWS Redshift, GCP BigQuery) and programming languages (Python, Java) is a plus
  • experience communicating with colleagues from engineering, analytics, and business backgrounds
  • degree in Engineering, Math, Statistics, Computer Science, or related discipline or equivalent experience is a plus.
  • be able to legally work in Europe (you hold an EU passport, an EU residency permit, or a Schengen work visa)
Share this job:
Senior Software Engineer
Nagarro  
java senior python big data cloud Oct 19 2019

Required experience and skills: 

  • Expertise in Java or Scala
  • Familiarity with cluster computing technologies such as Apache Spark or Hadoop MapReduce
  • Familiarity with relational and big data stores such as Postgres, HDFS, Apache Kudu and similar technologies
  • Strong skills in analytic computing and algorithms
  • Strong mathematical background, including statistics and numerical analysis
  • Knowledge of advanced programming concepts such as memory management, files & handles, multi-threading and operating systems.
  • Passion for finding and solving problems
  • Excellent communication skills, proven ability to convey complex ideas to others in a concise and clear manner 

Desirable experience and skills: 

  • Familiarity with scripting languages such as Python or R
  • Experience in performance measurement, bottleneck analysis, and resource usage monitoring
  • Familiarity with probabilistic and stochastic computational techniques
  • Experience with data access and computing in highly distributed cloud systems
  • Prior history with agile development
Share this job:
Solutions Engineer, Analytics
big data design Oct 17 2019

Company Description

CoEnterprise is an award-winning B2B software and professional services company headquartered in New York City. Founded in 2010, CoEnterprise delivers Supply Chain and Business Analytics solutions and services that transform how companies connect and do business. CoEnterprise approaches each relationship and engagement from the perspective of three core values: collaboration, ownership, and excellence. We value collaboration with both our partners and clients in order to present the best possible outcome for our customers. Our vow to accept ownership ensures that our entire staff takes pride in our work and it is our commitment to excellence that ensures that this work is at the highest standard possible.

Job Description

Responsibilities:

  • Effectively articulate technology and product positioning to both business and technical audiences
  • Lead strategic technical initiatives throughout the sales process and demonstrate CoEnterprise’s technical advantages.
  • Pursue the technical sales process with a coordinated focus on solutions development through discovery and requirements gathering, personalized demo, validation and documented design across assigned product groups.
  • Manage and interpret customer requirements, using astute questioning skills to understand and anticipate customer needs and match them to CoEnterprise’s products.
  • Create and deliver compelling, customer centric technical presentations and demonstrations by connecting technical features to customer business capabilities and drivers.
  • Identify all technical challenges of the customer to assure complete customer satisfaction through all stages of the sales process.
  • Collaborate seamlessly with the sales account managers and functional consultants, fostering an integrated team approach to customer engagement.
  • Research new technologies, tools and methodologies as they emerge that may be applicable
  • Be the customer advocate and liaison for product management and development. 
  • Act as an integral team member working to achieve regional and team sales goals. Must be able to establish and maintain strong relationships throughout the sales cycle

Qualifications

  • Tableau experience is a plus
  • Ability to travel 50-75% of the time
  • Knowledge and experience selling or reselling analytics software for Big Data solutions is a plus
  • Proven track record in Big Data & Analytics software and/or services sales
  • Excellent organizational and time management skills, with the ability to juggle multiple opportunities and relationships
  • Customer service-minded, focused on addressing needs and fulfilling commitments
  • Self-motivated and able to work both independently and as part of a team
  • Ability to develop and manage professional networks with partners, prospects, and clients
  • Exceptional communication skills
  • Strong sales closing and relationship management skills
  • Exceptional presentation and demo capabilities
  • Experience customizing and building storyboards for demos and POCs
  • Demonstrated success in achieving strategic deal wins
  • Ability to manage multiple, complex sales opportunities simultaneously
  • Ability to communicate with everyone from C-level executives down to agent/front-line employees
  • Bachelor’s Degree
Share this job:
Senior Product Manager - Elastic Cloud
Elastic  
cloud saas senior big data wordpress aws Oct 16 2019

At Elastic, we have a simple goal: to solve the world's data problems with products that delight and inspire. As the company behind the popular open source projects — Elasticsearch, Kibana, Logstash, and Beats — we help people around the world do great things with their data. From stock quotes to real time Twitter streams, Apache logs to WordPress blogs, our products are extending what's possible with data, delivering on the promise that good things come from connecting the dots. The Elastic family unites employees across 30+ countries into one coherent team, while the broader community spans across over 100 countries.

We are looking for a Sr. Product Manager to drive the success of our Cloud products. Elastic Cloud, our hosted Elasticsearch and Kibana service, is the fastest-growing product at Elastic, and we need your help in making it even better and bringing it to its fullest potential. If you're technical, have experience with developer-targeted products, and are looking to join a dynamic and growing team, this job is for you!

Responsibilities:

  • Gather requirements from customers, sales and Elastic’s own operations team and help with ongoing prioritization of features
  • Define KPIs to collect and analyze. Use this data to identify trends and make recommendations on how the platform can be optimized and improved, including (but not limited to) UX, pricing, packaging and product offerings
  • Support the sales team on customer engagements and take part in sales enablement activities within Elastic
  • Provide engineering with inputs on new and existing features and make sure they’re aligned with customers’ needs.
  • Work with the marketing and product marketing teams on messaging, positioning and launch activities.

Requirements:

  • Solid understanding and familiarity with cloud technologies (such as AWS, Azure, and GCP) and developer-focused products
  • Technical aptitude and attention to detail
  • Excellent spoken and written communication skills
  • 2+ years of proven track record in product management at a SaaS software company

Preferred skills and experience:

  • BA or a higher degree in a technical field (e.g. EE, CS)
  • Experience with Open Source software and/or commercial open source companies.
  • Experience building and/or operating a SaaS product
  • Experience with search, logging, and analytics products in the big data ecosystem (Elasticsearch, Solr, Hadoop, MongoDB, Spark etc)

Additional Information

We're looking to hire team members invested in realising the goal of making real-time data exploration easy and available to anyone. As a distributed company, we believe that diversity drives our vibe! Whether you're looking to launch a new career or grow an existing one, Elastic is the type of company where you can balance great work with great life.

  • Competitive pay based on the work you do here and not your previous salary
  • Equity
  • Global minimum of 16 weeks of paid in full parental leave (moms & dads)
  • Generous vacation time and one week of volunteer time off
  • Your age is only a number. It doesn't matter if you're just out of college or your children are; we need you for what you can do.

Elastic is an Equal Employment employer committed to the principles of equal employment opportunity and affirmative action for all applicants and employees. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, or disability status or any other basis protected by federal, state or local law, ordinance or regulation. Elastic also makes reasonable accommodations for disabled employees consistent with applicable law.

Share this job:
Senior Data Engineer
python senior data science big data cloud Oct 16 2019

PowerInbox is looking for a Senior Data Engineer

*This job is fully remote (only in the USA, though) with the option to work from our NYC office. We keep EST work hours*

If you join us, what will you do?

Build and maintain a real-time big data pipeline and reporting system for powerinbox. The data pipeline will feed our AI and analytics platform. The reporting system will automatically distribute reports to recipients on a configurable schedule. As needed, you will provide special reports as requested by sales and operations teams. This role offers opportunities to work with big data, data science, cloud computing, and the latest software technology.

Specific Goals

  • Build and maintain a data pipeline for powerinbox machine learning.
  • Assist with the development of a data warehouse on which reports are derived.
  • Process 8 billion event transactions each month.
  • Assure data is captured and stored without loss.
  • Write code to provide reports for powerinbox.
  • Write a system that will run reports on a configurable schedule (see the sketch after this list).
  • Respond to ad-hoc requests for information.
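
As a rough sketch of such a configurable report scheduler (using the third-party schedule library; the report names, times, and recipients are hypothetical):

    import time

    import schedule  # third-party: pip install schedule

    # Hypothetical configuration: report name -> (run time, recipients).
    REPORTS = {
        "daily_revenue": ("09:00", ["sales@example.com"]),
        "daily_engagement": ("08:30", ["ops@example.com"]),
    }

    def run_report(name, recipients):
        # Stub: query the warehouse, render, and email the report here.
        print(f"Running {name} for {', '.join(recipients)}")

    for name, (at_time, recipients) in REPORTS.items():
        schedule.every().day.at(at_time).do(run_report, name=name, recipients=recipients)

    while True:
        schedule.run_pending()
        time.sleep(60)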

In order to be great at your job,

You Are

A fast learner; have great analytical skills; relentless and persistent in accomplishing goals; enthusiastic with an infectious personality.

You Work

Efficiently; with flexibility; proactively; with attention to detail; to high standards.

Together We

Emphasize honesty and integrity; require teamwork; have open communication; follow through on commitments; stay calm under pressure.

You Have

  • Four to six years experience with Python or R
  • Three or more years experience developing and deploying software on Linux
  • Three or more years working with SQL
  • At least two years experience providing data analysis
  • Professional experience with data science knowledge
  • Working knowledge of BI tools and software

This is extra, but if you have it, it will make us happy

  • Experience working remotely
  • Knowledge of/interest in the digital and AdTech landscape
  • Experience working with big data

About PowerInbox

Why We Are

We believe that digital messaging is not meant to be stationary and static but relevant and hyper-targeted, filled with dynamic content.


Who We Are

We are a digital monetization startup ecosystem that is always open to new talent.


What We Are

We at PowerInbox boost your revenue and brand engagement through real-time advertising and native ad displays.


If interested please send your resume to hr@powerinbox.com

Share this job: