Remote Scala Jobs

Last Week

Sr. Data Scientist
python scala data science machine learning big data testing Aug 06

Senior Data Scientist @ NinthDecimal.

NinthDecimal (www.ninthdecimal.com) provides location-based intelligence to help advertisers plan, manage, measure, and optimize multi-platform cross-media campaigns to drive customer and revenue growth. As an industry leader in the AdTech & MarTech space, NinthDecimal delivers best-in-class measurement, insights, and analytics by deploying patented big data methodologies on a cutting-edge technology platform.

Our LocationGraph™ platform processes data on a massive scale, converting tens of billions of signals per day into accurate and actionable insights for our clients. We provide location-based intelligence services for top brands across industry verticals including retail, travel, leisure & entertainment, fast food & casual dining, telecommunications, and automotive.

As a member of the Data Science team, you’ll be responsible for developing statistical and machine-learning models that deliver accurate and robust measurement metrics of interest to our advertising clients. You will work closely with other data scientists, data analysts, product & engineering teams, and other business units. This is a great opportunity to work with real world data at scale and to help define and shape the measurement standards in a very dynamic and evolving industry.

Responsibilities:

  • Develop & deploy statistical & machine learning models at scale to create high quality disruptive products
  • Contribute to our growing portfolio of data science and technology patents
  • Establish robust processes to ensure the accuracy, stability, reproducibility, and overall quality of all data, algorithms, and the results they produce
  • Represent Data Science team in product and roadmap design sessions
  • Participate in building reliable QA processes for both data and results
  • Collaborate on key architectural decisions and design considerations
  • Contribute to and promote good software engineering practices across the Engineering Department.
  • Understand the current data sets and models and provide thought leadership by discovering new ways to enrich and use our massive data assets

Qualifications Required:

  • A true passion for data, data quality, research, and a rigorous data science approach
  • Master's or Ph.D. in Statistics, Economics, Operations Research, or a similar quantitative field
  • 5 to 10 years of professional experience with clear career progression and demonstrated success at developing models that drive business value
  • Excellent communication skills and the ability to present methodologies and findings to audiences with varying technical background
  • Solid understanding of probability and statistics
  • Solid understanding of research design, A/B and test-vs-control statistical testing frameworks
  • Solid understanding of unsupervised and supervised machine learning approaches including clustering and classification techniques.
  • Experience in building Machine Learning models (GLM, SVM, Bayesian Methods, Tree Based Methods, Neural Networks)
  • Solid understanding of how to assess the quality of machine learning models – including the ability to tune and optimize models and to diagnose and correct problems.
  • Experience working with multiple data types including numerical, categorical, and count data.
  • A driven leader, able to manage competing priorities and drive projects forward in a dynamic and fast paced business environment.
  • Experienced/advanced programmer in Scala, Python, or similar programming languages
  • Experienced/advanced programmer in Spark, SQL, and Hadoop
  • Experience in developing algorithms and building models based on TB-scale data
  • Familiarity with the digital media / advertising industry is a big plus
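One of the qualifications above is familiarity with test-vs-control statistical testing frameworks. As a minimal illustration of that idea (this is not NinthDecimal's methodology, and all figures are hypothetical), a pooled two-proportion z-test comparing an exposed group's conversion rate against a control group's can be sketched in Scala:

```scala
object TestVsControl {
  // Two-sample z-test on conversion rates (test vs. control).
  // convT/convC = conversions, nT/nC = group sizes. Hypothetical values only.
  def zScore(convT: Int, nT: Int, convC: Int, nC: Int): Double = {
    val pT = convT.toDouble / nT
    val pC = convC.toDouble / nC
    // Pooled conversion rate under the null hypothesis of no lift
    val pPool = (convT + convC).toDouble / (nT + nC)
    val se = math.sqrt(pPool * (1 - pPool) * (1.0 / nT + 1.0 / nC))
    (pT - pC) / se
  }

  def main(args: Array[String]): Unit = {
    // Exposed group converts at 5.5%, control at 5.0%, 100k users each
    val z = zScore(convT = 5500, nT = 100000, convC = 5000, nC = 100000)
    println(f"z = $z%.2f") // |z| > 1.96 rejects the null at the 5% level
  }
}
```

With these (made-up) numbers the lift is highly significant (z is roughly 5), which is the kind of result a measurement product would report to an advertiser.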

This Month

Software Engineer
indi  
python aws scala rest java cloud Jul 24

Background

At numo, we incubate new “fintech” companies. Our flagship product, indi, is growing rapidly, and we are seeking full stack software engineers to join our development team.

The Job

Here’s what you’ll be working on:

indi is a one-of-a-kind digital banking product targeted at self-employed customers in the rapidly growing gig economy. We are building a product that addresses the challenges these customers face in a unique way.

Job Responsibilities:

  • Be an integral part of the development team across our technology stack, which includes Scala, Python, Flutter, and Dart and is hosted on AWS.
  • Be willing to learn new technologies. If you are not familiar with some or all of our tech stack, we will be happy to help you ramp up.
  • Focus on creating software that is scalable, robust, testable, easy to maintain and easily deployed

We are looking for:

  • Real-world experience building products, ideally 5+ years
  • Expertise in modern architectures (e.g., microservices, event-based, map-reduce)
  • Experience with deploying and developing for cloud environments (AWS)
  • Familiarity with modern open source thinking and tools (git, continuous builds, continuous deployment, containers, dev ops, Jenkins, Docker)
  • Desire to build and be part of a fun, high-functioning team
  • A computer science degree is desired, but not required if you have real-world experience

What numo offers

  • Competitive salary
  • Opportunity to own equity in indi
  • Cool office space in East Liberty
  • Great benefits
Solutions Architect - Toronto
java python scala big data linux cloud Jul 15
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Coatue Management, Altimeter Capital and Franklin Templeton to join earlier investors Sequoia Capital, Benchmark, and Index Ventures in the recent Series E financing of a combined $250 million at a $4.5B valuation. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!

We are looking for a Solutions Architect to join our Customer Success team. As a Solutions Architect (SA), you will help customers leverage streaming architectures and applications to achieve their business results. In this role, you will interact directly with customers to provide software architecture, design, and operations expertise that leverages your deep knowledge of and experience in Apache Kafka, the Confluent platform, and complementary systems such as Hadoop, Spark, Storm, relational and NoSQL databases. You will develop and advocate best practices, gather and validate critical product feedback, and help customers overcome their operational challenges.

Throughout all these interactions, you will build a strong relationship with your customer in a very short space of time, ensuring exemplary delivery standards. You will also have the opportunity to help customers build state-of-the-art streaming data infrastructure, in partnership with colleagues who are widely recognized as industry leaders, as well as to optimize and debug customers' existing deployments.
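The core idea Confluent describes, treating data as a continually updating stream of events rather than a static snapshot, amounts to deriving current state as a fold over an ordered event log. A conceptual sketch in plain Scala (no Kafka API involved; an in-memory Seq and made-up event types stand in for a topic and its records):

```scala
object EventStreamSketch {
  // Conceptual sketch only: a real deployment would consume from Kafka.
  // Here an ordered Seq plays the role of the event log.
  sealed trait Event
  final case class Deposit(account: String, amount: Long) extends Event
  final case class Withdrawal(account: String, amount: Long) extends Event

  // The "materialized view": per-account balances derived by folding
  // over the full event history, recomputable at any point in the log.
  def materialize(log: Seq[Event]): Map[String, Long] =
    log.foldLeft(Map.empty[String, Long]) {
      case (state, Deposit(a, amt))    => state.updated(a, state.getOrElse(a, 0L) + amt)
      case (state, Withdrawal(a, amt)) => state.updated(a, state.getOrElse(a, 0L) - amt)
    }

  def main(args: Array[String]): Unit = {
    val log = Seq(Deposit("acct-1", 100), Deposit("acct-2", 50), Withdrawal("acct-1", 30))
    // Per-account balances after the fold: acct-1 -> 70, acct-2 -> 50
    println(materialize(log))
  }
}
```

Because state is a pure function of the log, appending new events and re-folding (or folding incrementally) keeps the view continually up to date, which is the property that distinguishes event streaming from snapshot-based integration.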

Location:
Toronto, with 60-75% travel expected.

Responsibilities

  • Helping a customer determine their platform and/or application strategy for moving to a more real-time, event-based business. Such engagements often involve remote preparation; presenting an onsite or remote workshop for the customer's architects, developers, and operations teams; investigating (with Engineering and other coworkers) solutions to difficult challenges; and writing a recommendations summary document.
  • Providing feedback to the Confluent Product and Engineering groups
  • Building tooling for another team or the wider company to help us push our technical boundaries and improve our ability to deliver consistently with high quality
  • Testing performance and functionality of new components developed by Engineering
  • Writing or editing documentation and knowledge base articles, including reference architecture materials and design patterns based on customer experiences
  • Honing your skills, building applications, or trying out new product features
  • Participating in community and industry events

Requirements

  • Deep experience designing, building, and operating in-production Big Data, stream processing, and/or enterprise data integration solutions, ideally using Apache Kafka
  • Demonstrated experience successfully managing multiple B2B infrastructure software development projects, including driving expansion, customer satisfaction, feature adoption, and retention
  • Experience operating Linux (configure, tune, and troubleshoot both RedHat and Debian-based distributions)
  • Experience using cloud providers (Amazon Web Services, Google Cloud, Microsoft Azure) for running high-throughput systems
  • Experience with Java Virtual Machine (JVM) tuning and troubleshooting
  • Experience with distributed systems (Kafka, Hadoop, Cassandra, etc.)
  • Proficiency in Java
  • Strong desire to tackle hard technical problems, and proven ability to do so with little or no direct daily supervision
  • Excellent communication skills, with an ability to clearly and concisely explain tricky issues and complex solutions
  • Ability to quickly learn new technologies
  • Ability and willingness to travel up to 50% of the time to meet with customers

Bonus Points

  • Experience helping customers build Apache Kafka solutions alongside Hadoop technologies, relational and NoSQL databases, message queues, and related products
  • Experience with Scala, Python, or Go
  • Experience working with a commercial team and demonstrated business acumen
  • Experience working in a fast-paced technology start-up
  • Experience managing projects, using any known methodology to scope, manage, and deliver on plan no matter the complexity
  • Bachelor-level degree in computer science, engineering, mathematics, or another quantitative field


Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.

Click here to review our California Candidate Privacy Notice, which describes how and when Confluent, Inc. and its group companies collect, use, and share certain personal information of California job applicants and prospective employees.

This Year

Backend Software Engineer, Enterprise & Migrations
 
backend java python javascript scala saas Jul 01
Atlassian is continuing to hire with all interviewing and on-boarding done virtually due to COVID-19. Everyone new to the team, along with our current staff, will temporarily work from home until it is safe to return to our offices.

Atlassian is looking for a backend software engineer to join our Enterprise and Migrations team. You’ll be joining a team focused on building features for our enterprise-scale customers to enable better governance, trust, and security. Our team has a direct impact on the growth of Atlassian and is the proud owner of the Atlassian Access product. We are enabling cross-product experiences, and are committed to removing all blockers for adoption of cloud for enterprise customers.

More about you
As a backend software engineer on this team, you will work with a talented team of Product Managers, Designers, and Architects to build application-layer services encompassing backend development, monitoring, scaling and optimizing to make the administration of Atlassian products simple at Enterprise scale.

You will be empowered to drive innovation by coming up with new and exciting ideas to creatively solve issues, as well as actively look for opportunities to improve the design, interface, and architecture of Atlassian's products on the cloud.

On your first day, we'll expect you to have:

  • Bachelor's degree in Engineering, Computer Science, or equivalent
  • Experience crafting and implementing highly scalable and performant RESTful micro-services
  • Proficiency in any modern object-oriented programming language (e.g., Java, Scala, Python, JavaScript)
  • Fluency in any one database technology (e.g. RDBMS like Oracle or Postgres and/or NoSQL like DynamoDB or Cassandra)
  • Real passion for collaboration and strong interpersonal and communication skills
  • Broad knowledge and understanding of SaaS, PaaS, IaaS industry with hands-on experience of public cloud offerings (AWS, GAE, Azure)
  • Familiarity with cloud architecture patterns and an engineering discipline to produce software with quality

It’s great, but not required, if you have:

  • Experience using AWS, Kubernetes and Docker containers
  • Familiarity with GraphQL, web application development and JavaScript frameworks (React, JQuery, Angular)
More about our benefits

Whether you work in an office or a distributed team, Atlassian is highly collaborative and yes, fun! To support you at work (and play) we offer some fantastic perks: ample time off to relax and recharge, flexible working options, five paid volunteer days a year for your favourite cause, an annual allowance to support your learning & growth, unique ShipIt days, a company paid trip after five years and lots more.

More about Atlassian

Creating software that empowers everyone from small startups to the who’s who of tech is why we’re here. We build tools like Jira, Confluence, Bitbucket, and Trello to help teams across the world become more nimble, creative, and aligned—collaboration is the heart of every product we dream of at Atlassian. From Amsterdam and Austin, to Sydney and San Francisco, we’re looking for people who want to write the future and who believe that we can accomplish so much more together than apart. At Atlassian, we’re committed to an environment where everyone has the autonomy and freedom to thrive, as well as the support of like-minded colleagues who are motivated by a common goal to: Unleash the potential of every team.

Additional Information

We believe that the unique contributions of all Atlassians are the drivers of our success. To make sure that our products and culture continue to incorporate everyone's perspectives and experiences, we never discriminate on the basis of race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status.

All your information will be kept confidential according to EEO guidelines.
Solutions Architect - West Coast
java python scala big data linux cloud Jul 01
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Coatue Management, Altimeter Capital and Franklin Templeton to join earlier investors Sequoia Capital, Benchmark, and Index Ventures in the recent Series E financing of a combined $250 million at a $4.5B valuation. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!

We are looking for a Solutions Architect to join our Customer Success team. As a Solutions Architect (SA), you will help customers leverage streaming architectures and applications to achieve their business results. In this role, you will interact directly with customers to provide software architecture, design, and operations expertise that leverages your deep knowledge of and experience in Apache Kafka, the Confluent platform, and complementary systems such as Hadoop, Spark, Storm, relational and NoSQL databases. You will develop and advocate best practices, gather and validate critical product feedback, and help customers overcome their operational challenges.

Throughout all these interactions, you will build a strong relationship with your customer in a very short space of time, ensuring exemplary delivery standards. You will also have the opportunity to help customers build state-of-the-art streaming data infrastructure, in partnership with colleagues who are widely recognized as industry leaders, as well as to optimize and debug customers' existing deployments.

Location:
Anywhere on the West Coast, USA, with 60-75% travel expected.

Responsibilities

  • Helping a customer determine their platform and/or application strategy for moving to a more real-time, event-based business. Such engagements often involve remote preparation; presenting an onsite or remote workshop for the customer's architects, developers, and operations teams; investigating (with Engineering and other coworkers) solutions to difficult challenges; and writing a recommendations summary document.
  • Providing feedback to the Confluent Product and Engineering groups
  • Building tooling for another team or the wider company to help us push our technical boundaries and improve our ability to deliver consistently with high quality
  • Testing performance and functionality of new components developed by Engineering
  • Writing or editing documentation and knowledge base articles, including reference architecture materials and design patterns based on customer experiences
  • Honing your skills, building applications, or trying out new product features
  • Participating in community and industry events

Requirements

  • Deep experience designing, building, and operating in-production Big Data, stream processing, and/or enterprise data integration solutions, ideally using Apache Kafka
  • Demonstrated experience successfully managing multiple B2B infrastructure software development projects, including driving expansion, customer satisfaction, feature adoption, and retention
  • Experience operating Linux (configure, tune, and troubleshoot both RedHat and Debian-based distributions)
  • Experience using cloud providers (Amazon Web Services, Google Cloud, Microsoft Azure) for running high-throughput systems
  • Experience with Java Virtual Machine (JVM) tuning and troubleshooting
  • Experience with distributed systems (Kafka, Hadoop, Cassandra, etc.)
  • Proficiency in Java
  • Strong desire to tackle hard technical problems, and proven ability to do so with little or no direct daily supervision
  • Excellent communication skills, with an ability to clearly and concisely explain tricky issues and complex solutions
  • Ability to quickly learn new technologies
  • Ability and willingness to travel up to 50% of the time to meet with customers

Bonus Points

  • Experience helping customers build Apache Kafka solutions alongside Hadoop technologies, relational and NoSQL databases, message queues, and related products
  • Experience with Scala, Python, or Go
  • Experience working with a commercial team and demonstrated business acumen
  • Experience working in a fast-paced technology start-up
  • Experience managing projects, using any known methodology to scope, manage, and deliver on plan no matter the complexity
  • Bachelor-level degree in computer science, engineering, mathematics, or another quantitative field


Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.

Click here to review our California Candidate Privacy Notice, which describes how and when Confluent, Inc. and its group companies collect, use, and share certain personal information of California job applicants and prospective employees.
Product Security Engineer
 
java python scala testing Jun 22
Atlassian is continuing to hire with all interviewing and on-boarding done virtually due to COVID-19. Everyone new to the team, along with our current staff, will temporarily work from home until it is safe to return to our offices.


Job Duties:

Ensure the security (confidentiality, integrity, and availability) of the company's internal software services and external software products. Practice threat modeling, architecture/design review, static analysis, and penetration testing to achieve these objectives; perform design reviews, code reviews, and threat modeling. Work closely with development teams at each stage of the software development lifecycle to incorporate secure design, deliver secure code, identify vulnerabilities, and deliver remediation. Serve as subject matter expert for any client company with security questions. Work with the company's support teams to address customer security concerns and reports. Write automation to continuously test the company's products/infrastructure, identify new vulnerabilities, and allow the security team to function more efficiently. Collaborate closely with all engineering groups. Work in conjunction with the Security Intelligence team to investigate the root cause of security incidents. Receive, triage, and respond to vulnerability reports from the public and via the company's bug bounty. Write new code, primarily in Java or Python, to produce unique and proprietary software. Perform source code auditing for the Java, Scala, and Python languages, complete web scanning, and utilize custom and commercial tools. Conduct independent research related to security engineering.

Minimum Requirements:

Master's degree in Computer Science, Computer Engineering, Information Security, or a related field of study, plus two (2) years of experience in information security consulting, security engineering, application security engineering, product security engineering, or security-focused development at software companies.

Alternate Requirements:

Bachelor's degree in Computer Science, Computer Engineering, Information Security, or a related field of study, plus five (5) years of experience in information security consulting, security engineering, application security engineering, product security engineering, or security-focused development at software companies.

Special Requirements:

Must pass a technical interview.
More about our benefits

Whether you work in an office or a distributed team, Atlassian is highly collaborative and yes, fun! To support you at work (and play) we offer some fantastic perks: ample time off to relax and recharge, flexible working options, five paid volunteer days a year for your favourite cause, an annual allowance to support your learning & growth, unique ShipIt days, a company paid trip after five years and lots more.

More about Atlassian

Creating software that empowers everyone from small startups to the who’s who of tech is why we’re here. We build tools like Jira, Confluence, Bitbucket, and Trello to help teams across the world become more nimble, creative, and aligned—collaboration is the heart of every product we dream of at Atlassian. From Amsterdam and Austin, to Sydney and San Francisco, we’re looking for people who want to write the future and who believe that we can accomplish so much more together than apart. At Atlassian, we’re committed to an environment where everyone has the autonomy and freedom to thrive, as well as the support of like-minded colleagues who are motivated by a common goal to: Unleash the potential of every team.

Additional Information

We believe that the unique contributions of all Atlassians is the driver of our success. To make sure that our products and culture continue to incorporate everyone's perspectives and experience we never discriminate on the basis of race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status.

All your information will be kept confidential according to EEO guidelines.
Senior Software Engineer
 
senior python scala cloud Jun 22
Atlassian is continuing to hire with all interviewing and on-boarding done virtually due to COVID-19. Everyone new to the team, along with our current staff, will temporarily work from home until it is safe to return to our offices.

Job Duties:
As a member of the Identity team, build world-class identity and access management solutions for Atlassian products. Drive the technical direction and implementation across identity, privacy, and access control to ensure Atlassian products remain trustworthy for customers. Use strong architecture and technical process knowledge and hands-on coding ability to develop and implement new security, identity, and anti-spam features and identity management solutions at scale. Design, implement, and launch highly secure, high-performance RESTful microservices in a public cloud infrastructure. Build technological infrastructure and scale products using professional knowledge of and experience with modern programming languages (Java, Scala, Python, and Go), database technologies (RDBMS, Oracle, and/or NoSQL Cassandra), and software development methodologies. Design and implement new software features and functionality based on large-scale distributed systems. Employ hands-on coding ability and software architecture knowledge to forecast and propose changes or improvements to products and technologies. Collaborate with other engineering teams, ensuring innovative work is delivered. Develop and deploy software services in a cloud infrastructure using continuous delivery methods, and utilize knowledge of agile software development methodologies (e.g., XP, Scrum).

Minimum Requirements:
Master's degree in Computer Science or a related field of study and 2 years of experience building technological infrastructure and scalable solutions using modern programming languages (Java, Scala, Python, and Go), database technologies (RDBMS, Oracle, and/or NoSQL Cassandra), and software development methodologies; developing and deploying software services in a cloud infrastructure using continuous delivery methods; and working with agile software development methodologies (XP, Scrum, etc.).

Alternate Requirements:
Bachelor's degree in Computer Science or a related field of study and 5 years of experience building technological infrastructure and scalable solutions using modern programming languages (Java, Scala, Python, and Go), database technologies (RDBMS, Oracle, and/or NoSQL Cassandra), and software development methodologies; developing and deploying software services in a cloud infrastructure using continuous delivery methods; and working with agile software development methodologies (XP, Scrum, etc.).

Special Requirements:
Must pass a technical interview.
More about our benefits

Whether you work in an office or a distributed team, Atlassian is highly collaborative and yes, fun! To support you at work (and play) we offer some fantastic perks: ample time off to relax and recharge, flexible working options, five paid volunteer days a year for your favourite cause, an annual allowance to support your learning & growth, unique ShipIt days, a company paid trip after five years and lots more.

More about Atlassian

Creating software that empowers everyone from small startups to the who’s who of tech is why we’re here. We build tools like Jira, Confluence, Bitbucket, and Trello to help teams across the world become more nimble, creative, and aligned—collaboration is the heart of every product we dream of at Atlassian. From Amsterdam and Austin, to Sydney and San Francisco, we’re looking for people who want to write the future and who believe that we can accomplish so much more together than apart. At Atlassian, we’re committed to an environment where everyone has the autonomy and freedom to thrive, as well as the support of like-minded colleagues who are motivated by a common goal to: Unleash the potential of every team.

Additional Information

We believe that the unique contributions of all Atlassians are the drivers of our success. To make sure that our products and culture continue to incorporate everyone's perspectives and experiences, we never discriminate on the basis of race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status.

All your information will be kept confidential according to EEO guidelines.
Fullstack Software Engineer, Confluence
 
javascript scala saas cloud aws frontend Jun 05
Atlassian is continuing to hire with all interviewing and on-boarding done virtually due to COVID-19. Everyone new to the team, along with our current staff, will temporarily work from home until it is safe to return to our offices.

Atlassian is looking for expert and hardworking Software Engineers to join the Confluence team in our Mountain View, CA office. Our group has over 100 platform engineers building services that power the most critical parts of Atlassian's experience. While we may be a big group, individuals on the newly formed platform adoption team can have a massive impact across the organization. We work strategically during post-M&A activities to bridge user experiences across all our products through targeted platform adoption. This is a key part of Atlassian's business model and a high-visibility role spanning multiple organizations.

On any given week you'll be talking to engineers, product managers, designers, and leaders across the company. If you are looking for an opportunity to tackle not only hard software integration problems but also hard company integration problems, then this is the role for you. You'll drive forward and own projects that can span teams of teams of 100 people, all while working with a hardworking team of engineers who have your back. You won't always be measured by your code, but by the outcomes you produce by bringing a diverse set of people together. Your thought leadership and solution architecture will be sought after as people look to you for solutions to the hardest problems in the company.

On your first day, we'll expect you to have:

  • Experience with Scala and/or Node.js
  • Experience with React and other front end JavaScript frameworks
  • 3+ years of experience crafting and implementing high-performance RESTful micro-services serving millions of requests a day
  • Understanding of SaaS, PaaS, IaaS industry with hands on experience with public or private cloud offerings (e.g., AWS, GCP, Azure)
  • Previously worked across multiple codebases when delivering features
  • Knowledge to evaluate trade-offs between correctness, robustness, performance, space and time
  • Experience in taking ownership of features, with a team on short and long-running projects
  • Comprehensive understanding of microservices based architecture
  • A champion of practices like continuous delivery and infrastructure as code

It’s awesome, but not required if you have:

  • 6+ years of industry experience as a Software Engineer
  • Comprehensive knowledge about identity platforms, IDPaaS such as Auth0, Authentication, and Authorization
  • Experience working as a Solutions Architect or a background in consulting
  • Experience with large scale distributed systems and event-driven architectures
  • Practical knowledge of agile software development methodologies (e.g., XP, scrum)
More about our benefits

Whether you work in an office or a distributed team, Atlassian is highly collaborative and yes, fun! To support you at work (and play) we offer some fantastic perks: ample time off to relax and recharge, flexible working options, five paid volunteer days a year for your favourite cause, an annual allowance to support your learning & growth, unique ShipIt days, a company paid trip after five years and lots more.

More about Atlassian

Creating software that empowers everyone from small startups to the who’s who of tech is why we’re here. We build tools like Jira, Confluence, Bitbucket, and Trello to help teams across the world become more nimble, creative, and aligned—collaboration is the heart of every product we dream of at Atlassian. From Amsterdam and Austin, to Sydney and San Francisco, we’re looking for people who want to write the future and who believe that we can accomplish so much more together than apart. At Atlassian, we’re committed to an environment where everyone has the autonomy and freedom to thrive, as well as the support of like-minded colleagues who are motivated by a common goal to: Unleash the potential of every team.

Additional Information

We believe that the unique contributions of all Atlassians are the driver of our success. To make sure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate on the basis of race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status.

All your information will be kept confidential according to EEO guidelines.
Share this job:
Associate Solutions Architect
java python scala big data linux cloud Jun 03
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Coatue Management, Altimeter Capital and Franklin Templeton to join earlier investors Sequoia Capital, Benchmark, and Index Ventures in the recent Series E financing of a combined $250 million at a $4.5B valuation. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!

We are looking for a Solutions Architect to join our Customer Success team. As a Solutions Architect (SA), you will help customers leverage streaming architectures and applications to achieve their business results. In this role, you will interact directly with customers to provide software architecture, design, and operations expertise that leverages your deep knowledge of and experience in Apache Kafka, the Confluent platform, and complementary systems such as Hadoop, Spark, Storm, relational and NoSQL databases. You will develop and advocate best practices, gather and validate critical product feedback, and help customers overcome their operational challenges.

Throughout all these interactions, you will build a strong relationship with your customer in a very short space of time, ensuring exemplary delivery standards. You will also have the opportunity to help customers build state-of-the-art streaming data infrastructure, in partnership with colleagues who are widely recognized as industry leaders, as well as to optimize and debug customers' existing deployments.
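The premise behind an event streaming platform, treating data as a continually updating stream of events rather than static snapshots, can be illustrated without any Kafka dependency. This is a hedged, dependency-free sketch (the `DepositEvent` shape and all names are invented for the example, not Confluent APIs): folding an event log produces a materialized view that is always current, and replaying the log from the start rebuilds the same view.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy illustration of event streaming: instead of querying a static
// snapshot, we fold an ever-growing log of events into a materialized
// view that stays up to date. All names here are illustrative only.
public class StreamView {
    // One event in the log: which account it concerns and the amount delta.
    public record DepositEvent(String account, long amountCents) {}

    // Apply a single event to the current view (the "stream processor" step).
    public static void apply(Map<String, Long> view, DepositEvent e) {
        view.merge(e.account(), e.amountCents(), Long::sum);
    }

    // Replaying the whole log from the beginning rebuilds the same view --
    // the property that lets an event log serve as the source of truth.
    public static Map<String, Long> replay(List<DepositEvent> log) {
        Map<String, Long> view = new HashMap<>();
        for (DepositEvent e : log) apply(view, e);
        return view;
    }

    public static void main(String[] args) {
        List<DepositEvent> log = List.of(
            new DepositEvent("alice", 1000),
            new DepositEvent("bob", 250),
            new DepositEvent("alice", 500));
        System.out.println(replay(log).get("alice")); // 1500
    }
}
```

In a real deployment the log lives in Kafka topics and the fold runs in a stream processor; the invariant (view = fold over the log) is the same.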

Location:
You will be based in the North East (remote), with 60-70% travel.

Responsibilities

  • Helping a customer determine their platform and/or application strategy for moving to a more real-time, event-based business. Such engagements often involve remote preparation; presenting an onsite or remote workshop for the customer’s architects, developers, and operations teams; investigating (with Engineering and other coworkers) solutions to difficult challenges; and writing a recommendations summary doc.
  • Providing feedback to the Confluent Product and Engineering groups
  • Building tooling for another team or the wider company to help us push our technical boundaries and improve our ability to deliver consistently with high quality
  • Testing performance and functionality of new components developed by Engineering
  • Writing or editing documentation and knowledge base articles, including reference architecture materials and design patterns based on customer experiences
  • Honing your skills, building applications, or trying out new product features
  • Participating in community and industry events

Requirements

  • Deep experience designing, building, and operating in-production Big Data, stream processing, and/or enterprise data integration solutions, ideally using Apache Kafka
  • Demonstrated experience successfully managing multiple B2B infrastructure software development projects, including driving expansion, customer satisfaction, feature adoption, and retention
  • Experience operating Linux (configure, tune, and troubleshoot both RedHat and Debian-based distributions)
  • Experience using cloud providers (Amazon Web Services, Google Cloud, Microsoft Azure) for running high-throughput systems
  • Experience with Java Virtual Machine (JVM) tuning and troubleshooting
  • Experience with distributed systems (Kafka, Hadoop, Cassandra, etc.)
  • Proficiency in Java
  • Strong desire to tackle hard technical problems, and proven ability to do so with little or no direct daily supervision
  • Excellent communication skills, with an ability to clearly and concisely explain tricky issues and complex solutions
  • Ability to quickly learn new technologies
  • Ability and willingness to travel up to 50% of the time to meet with customers

Bonus Points

  • Experience helping customers build Apache Kafka solutions alongside Hadoop technologies, relational and NoSQL databases, message queues, and related products
  • Experience with Scala, Python, or Go
  • Experience working with a commercial team and demonstrated business acumen
  • Experience working in a fast-paced technology start-up
  • Experience managing projects, using any known methodology to scope, manage, and deliver on plan no matter the complexity
  • Bachelor-level degree in computer science, engineering, mathematics, or another quantitative field


Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.

Click here to review our California Candidate Privacy Notice, which describes how and when Confluent, Inc. and its group companies collect, use, and share certain personal information of California job applicants and prospective employees.
Share this job:
Backend Engineer
scala php java elasticsearch postgresql backend May 30

Ascendify is looking for a full-time Backend Engineer to join our team.
As a Backend Engineer you will work within our Backend Engineering team to build new projects and maintain existing ones. You must be capable of working in a fast-paced, rapidly changing environment and be self-motivated, results-driven, and detail-oriented to achieve success.

A successful candidate can work remotely in this role but must be available during core Pacific Time hours. They must be able to stay productive in a remote environment without direct supervision. They must also be legally able to work in the United States without the need for sponsorship. Candidates outside the United States need not apply.

Responsibilities:

  • Write high-performance, reusable, modular code
  • Write automated unit tests
  • Create new functions and features to improve the Ascendify product
  • Write technical specs for new features, including database ER diagrams

Qualifications:

  • 5+ years of experience working with a scripting language (Python, PHP, or Ruby)
  • 3+ years with a compiled language (Scala, Java, etc.)
  • Experience working with an object-oriented language
  • SQL experience
  • Elasticsearch experience
  • Extraordinary communication skills
  • Willing and able to learn (if needed) and primarily use PHP and Scala

Preferences:

  • B.S. in Computer Sciences or related discipline
  • Experience with Play Framework
  • DevOps experience is a plus
Share this job:
Senior Engineer
 
senior java python javascript scala saas May 12
Atlassian is continuing to hire for all open roles with all interviewing and on-boarding done virtually due to COVID-19. Everyone new to the team, along with our current staff, will temporarily work from home until it is safe to return to our offices.

Job duties:

Responsible for making Atlassian’s cloud-scale usage open to enterprises by building out enterprise-grade scale accompanied by governance, trust, and security. Responsible for collaborating with a team of product managers, designers, and architects to build Atlassian’s application-layer services and ensure the administration of Atlassian products and processes is simple at enterprise scale, based on fluency in any modern object-oriented programming language, including but not limited to Java, the Java Spring Framework, Scala, Python, and JavaScript. Drive Atlassian’s innovative software products and processes by identifying new ways to solve technical issues, using knowledge of database technology (e.g., RDBMS like Oracle or Postgres and/or NoSQL like DynamoDB, MongoDB, or Cassandra) and knowledge and understanding of the SaaS, PaaS, and IaaS industry, with hands-on experience of public cloud offerings including but not limited to AWS, Azure, and GCP. Responsible for microservices or distributed systems and for monitoring and maintaining production systems. Use knowledge of cloud architecture patterns. Identify opportunities for improvement to the design, interface, and architecture of Atlassian’s software products on the cloud. Commit to challenging current software trends in the cloud development market in order to create a solid experience across the Atlassian brand. Monitor all production systems in AWS, remediate technical issues when discovered, and maintain three-nines availability for the services involved. Craft and implement high-performance RESTful micro-services that serve millions of requests per day.

Minimum requirements:

Bachelor’s degree in Computer Science, Information Systems, or a closely related field of study, plus five (5) years of experience as a software developer with hands-on experience of public cloud offerings (AWS, Azure, GCP), relational databases such as Postgres, the Java Spring Framework, NoSQL such as DynamoDB or MongoDB, microservices or distributed systems, and monitoring and maintaining production systems. Alternate requirements: Master’s degree in Computer Science, Information Systems, or a related field of study, plus two (2) years of experience as a software developer with hands-on experience of public cloud offerings (AWS, Azure, GCP), relational databases such as Postgres, the Java Spring Framework, NoSQL such as DynamoDB or MongoDB, microservices or distributed systems, and monitoring and maintaining production systems.

Special requirements: Must pass a technical interview.
More about our benefits

Whether you work in an office or a distributed team, Atlassian is highly collaborative and yes, fun! To support you at work (and play) we offer some fantastic perks: ample time off to relax and recharge, flexible working options, five paid volunteer days a year for your favourite cause, an annual allowance to support your learning & growth, unique ShipIt days, a company paid trip after five years and lots more.

More about Atlassian

Creating software that empowers everyone from small startups to the who’s who of tech is why we’re here. We build tools like Jira, Confluence, Bitbucket, and Trello to help teams across the world become more nimble, creative, and aligned—collaboration is the heart of every product we dream of at Atlassian. From Amsterdam and Austin, to Sydney and San Francisco, we’re looking for people who want to write the future and who believe that we can accomplish so much more together than apart. At Atlassian, we’re committed to an environment where everyone has the autonomy and freedom to thrive, as well as the support of like-minded colleagues who are motivated by a common goal to: Unleash the potential of every team.

Additional Information

We believe that the unique contributions of all Atlassians are the driver of our success. To make sure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate on the basis of race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status.

All your information will be kept confidential according to EEO guidelines.
Share this job:
Cloud Scala Software Developer
scala playframework cloud java javascript docker May 08

Cloud Scala Software Developer (Remote United States)

At Railroad19, we develop customized software solutions and provide software development services.  We are currently seeking a Scala Software Developer who is fluent in Scala and web applications.  The successful engineer will be a technical resource for the development of clean and maintainable code. In addition to contributing code and tangible deliverables, the role is expected to act as an adviser to help identify, educate, and foster best-in-class solutions. Creating these relationships requires strong communication skills.


At Railroad19, you are part of a company that values your work and gives you the tools you need to succeed. We are headquartered in Saratoga Springs, New York, but we are a distributed team of remote developers across the US. 

This is a full-time role with vacation, full benefits and 401k.  Railroad19 provides competitive compensation with excellent benefits and a great corporate culture.


The role is remote, U.S.-located only, and full time (no contractors, corp-to-corp, or 1099).

Core responsibilities:

  • Understand our client's fast-moving business requirements
  • Negotiate appropriate solutions with multiple stakeholders
  • Write and maintain scalable enterprise quality software
  • Develop new applications and production application support
  • Participate in detailed technical design, development, and implementation of applications using existing and emerging technology platforms.
  • Manage the complete software development life cycle
  • Write functional and unit tests in order to maintain code quality
  • Develop an understanding of client business processes, objectives, and solution requirements
  • Participate in project work groups with subject matter experts and stakeholders to understand specific needs
  • Collaborate with other teams in order to deliver a high-performance application that contains few or no defects
  • Identify new opportunities, tools, and services to enhance the custom software platform
  • Support and troubleshoot issues (process & system), identify root cause, and proactively recommend sustainable corrective actions

Skills & Experience:

  • Advanced experience building Scala-based software solutions
  • Extensive enterprise experience with web applications
  • Enterprise experience with relational and nonrelational databases
  • Hands on experience with Azure and/or Google cloud, Docker or Container Orchestration (Kubernetes) is a plus
  • Hands on experience with Postgres, MySQL, or Redis technologies is a plus
  • Hands on experience with Play framework
  • Hands on experience with Java 8 a plus
  • Hands on experience with NoSQL technologies
  • Familiarity with React and/or similar JavaScript frameworks is a plus
  • Demonstrates willingness to learn new technologies and takes pride in delivering working software
  • Excellent oral and written communication skills, analytical, and problem-solving skills
  • Experience participating on an agile team
  • Is self-directed and can effectively contribute with little supervision
  • Experience in Banking/Finance fields a plus
  • Bachelor's or master's degree in computer science, computer engineering, or other technical discipline; or equivalent work experience
Share this job:
Senior Backend Developer
komoot  
aws java scala kotlin backend senior May 06

Millions of people experience real-life adventures with our apps. We help people all over the world discover the best hiking and biking routes, empowering our users to explore more of the great outdoors. And we’re good at it: Google and Apple have listed us as one of their Apps of the Year numerous times, and with more than 10 million users and 100,000 five-star reviews, komoot is on its way to becoming one of the most popular cycling and hiking platforms.
Join our fully remote team of 60+ people and change the way people explore!

As komoot’s next backend engineer, you’ll join a highly motivated team of tech enthusiasts. We are focused on impact; that’s why we love to find simple and smart solutions to complex problems, and embrace modern technologies to face our tough challenges.
Join us if you live and love infrastructure as code, automating workflows, x10 scaling challenges and building resilient, self-healing micro-services.

Why you will love it

  • You’ll work on a global product that inspires millions of users to enjoy the great outdoors
  • Positively impact millions of users directly with your onboarding project
  • Due to the nature of our data and our scale, you will face interesting challenges that take innovative, non-standard solutions
  • We believe good ideas count more than titles
  • You’ll take ownership over your projects from day one
  • Small overhead: you will work in a small and effective cross-functional team
  • You’ll work together with enthusiastic engineers, hikers and cyclists.
  • We let you work from wherever you want, be it a beach, the mountains, your house, a co-working location of your choice, our HQ in Potsdam, or anywhere else that lies in any time zone between UTC-1 and UTC+3
  • You’ll travel with our team to amazing outdoor places several times a year to exchange ideas, learnings and go for hikes and rides. Check out this video to find out more about our team.

What you will do

  • Implement new product features, working closely with client developers, designers, copywriters, quality assurance, data scientists, and product managers
  • Keep our system state-of-the-art and resilient for our fast growing traffic
  • Develop end-to-end solutions including concept, road map planning, implementation, testing, deployment and monitoring
  • Build new micro-services with Kotlin and AWS, and improve existing ones
  • Work on high-traffic online services (like REST APIs) and offline workers for data crunching
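A "micro-service" behind a REST API, as in the list above, is at its core a small HTTP server owning one concern. Here is a minimal, hedged sketch using only the JDK's built-in `com.sun.net.httpserver` (not komoot's actual stack, which the posting says is Kotlin on AWS; the `/health` route is illustrative):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Minimal REST-style service using only the JDK's built-in HTTP server.
// Real services add routing, JSON (de)serialization, metrics, and auth.
public class TinyService {
    public static HttpServer start(int port) throws IOException {
        // Port 0 asks the OS for any free port -- handy for tests.
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/health", exchange -> {
            byte[] body = "{\"status\":\"ok\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().set("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws IOException {
        HttpServer s = start(8080);
        System.out.println("listening on port " + s.getAddress().getPort());
    }
}
```

A health endpoint like this is typically what load balancers and orchestrators poll to decide whether an instance should receive traffic.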

You will be successful in this position if you

  • Are highly self-driven, responsible and keen to learn and improve
  • Have 3+ years of professional experience in developing distributed and resilient web applications
  • Have 3+ years of professional experience with Kotlin, Java or Scala
  • Have 3+ years of professional experience with AWS, Google Cloud or Microsoft Azure
  • Have experience with Infrastructure as Code, continuous integration & deployment and monitoring
  • Enjoy paying attention to details and care about solid solutions
  • Are a great communicator in a diverse team

Sounds like you?

Then send us the following

  • Your CV
  • A write-up explaining who you are and why you are interested in working at komoot
  • Examples of your work (e.g. GitHub, PDFs, Slideshare, etc.)
  • Feel free to send us something that shows us a little more about what you’re interested in, be it your Twitter/Instagram account, or your OpenStreetMap profile if you have one
Share this job:
Senior Fullstack Engineer, Confluence
 
senior javascript scala saas cloud aws May 05
Atlassian is continuing to hire for all open roles with all interviewing and on-boarding done virtually due to COVID-19. Everyone new to the team, along with our current staff, will temporarily work from home until it is safe to return to our offices.

Atlassian is looking for expert and hardworking Software Engineers to join the Confluence team in our Mountain View, CA office. Our group has over 100 platform engineers building services that power the most critical parts of Atlassian’s experience. While we may be a big group, individuals on the newly formed platform adoption team can have a massive impact across the organization. We work strategically during post-M&A activities to bridge user experiences across all our products through targeted platform adoption. This is a key part of Atlassian’s business model and a high-visibility role spanning multiple organizations.

On any given week you’ll be talking to engineers, product managers, designers, and leaders across the company. If you are looking for an opportunity to tackle not only hard software integration problems but also hard company integration problems, then this is the role for you. You’ll drive forward and own projects that can span teams of teams of 100+ people, all while working with a hardworking team of engineers who have your back. You won’t always be measured by your code, but by the outcomes you produce by bringing a diverse set of people together. Your thought leadership and solution architecture will be sought after as people look to you for solutions to the hardest problems in the company.

On your first day, we'll expect you to have:

  • Experience in Scala and Node.js
  • Experience with React and other front end JavaScript frameworks
  • 3+ years of experience crafting and implementing high-performance RESTful micro-services serving millions of requests a day
  • Understanding of SaaS, PaaS, IaaS industry with hands on experience with public or private cloud offerings (e.g., AWS, GCP, Azure)
  • Previously worked across multiple codebases when delivering features
  • Ability to evaluate trade-offs between correctness, robustness, performance, space, and time
  • Experience taking ownership of features while working with a team on short and long-running projects
  • Comprehensive understanding of microservices-based architecture
  • A champion of practices like continuous delivery and infrastructure as code

It’s awesome, but not required if you have:

  • 6+ years of industry experience as a Software Engineer
  • Comprehensive knowledge of identity platforms and IDPaaS offerings such as Auth0, and of authentication and authorization
  • Experience working as a Solutions Architect or a background in consulting
  • Experience with large scale distributed systems and event-driven architectures
  • Practical knowledge of agile software development methodologies (e.g., XP, scrum)
More about our benefits

Whether you work in an office or a distributed team, Atlassian is highly collaborative and yes, fun! To support you at work (and play) we offer some fantastic perks: ample time off to relax and recharge, flexible working options, five paid volunteer days a year for your favourite cause, an annual allowance to support your learning & growth, unique ShipIt days, a company paid trip after five years and lots more.

More about Atlassian

Creating software that empowers everyone from small startups to the who’s who of tech is why we’re here. We build tools like Jira, Confluence, Bitbucket, and Trello to help teams across the world become more nimble, creative, and aligned—collaboration is the heart of every product we dream of at Atlassian. From Amsterdam and Austin, to Sydney and San Francisco, we’re looking for people who want to write the future and who believe that we can accomplish so much more together than apart. At Atlassian, we’re committed to an environment where everyone has the autonomy and freedom to thrive, as well as the support of like-minded colleagues who are motivated by a common goal to: Unleash the potential of every team.

Additional Information

We believe that the unique contributions of all Atlassians are the driver of our success. To make sure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate on the basis of race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status.

All your information will be kept confidential according to EEO guidelines.
Share this job:
Data Engineer
 
java python scala big data aws May 04
Atlassian is continuing to hire for all open roles with all interviewing and on-boarding done virtually due to COVID-19. Everyone new to the team, along with our current staff, will temporarily work from home until it is safe to return to our offices.

Atlassian is looking for a Data Engineer to join our Go-To-Market Data Engineering (GTM-DE) team, which is responsible for building our data lake, maintaining our big data pipelines and services, and facilitating the movement of billions of messages each day. We work directly with business stakeholders and plenty of platform and engineering teams to enable growth and retention strategies at Atlassian. We are looking for an open-minded, structured thinker who is passionate about building services that scale.

On a typical day you will help our stakeholder teams ingest data faster into our data lake, find ways to make our data pipelines more efficient, or even come up with ideas to help instigate self-serve data engineering within the company. Then you will move on to building microservices and architecting, designing, and enabling self-serve capabilities at scale to help Atlassian grow.

You’ll get the opportunity to work on an AWS-based data lake backed by a full suite of open-source projects such as Presto, Spark, Airflow, and Hive. We are a team with little legacy in our tech stack, and as a result you’ll spend less time paying off technical debt and more time identifying ways to make our platform better and improve our users’ experience.

More about you
As a data engineer on the GTM-DE team, you will have the opportunity to apply your strong technical experience building highly reliable services to managing and orchestrating a multi-petabyte-scale data lake. You enjoy working in a fast-paced environment and are able to take vague requirements and transform them into solid solutions. You are motivated by solving challenging problems, where creativity is as crucial as your ability to write code and test cases.

On your first day, we'll expect you to have:

  • At least 3 years of professional experience as a software engineer or data engineer
  • A BS in Computer Science or equivalent experience
  • Strong programming skills (some combination of Python, Java, and Scala preferred)
  • Experience with data modeling
  • Knowledge of data warehousing concepts
  • Experience writing SQL, structuring data, and data storage practices
  • Experience building data pipelines and microservices
  • Experience with Spark, Hive, Airflow, and other technologies for processing large volumes of streaming data
  • A willingness to accept failure, learn and try again
  • An open mind to try solutions that may seem crazy at first
  • Experience working on Amazon Web Services (in particular using EMR, Kinesis, RDS, S3, SQS and the like)
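Engines like Spark in the list above fundamentally compute aggregates over windows of event time, just at enormous scale. A dependency-free toy sketch of a tumbling-window count (the event shape and 60-second window are invented for the example, not Atlassian's actual pipeline):

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Toy tumbling-window aggregation: the kind of computation a Spark job
// runs at scale over billions of events. All names are illustrative.
public class WindowCount {
    public record Event(long epochSeconds, String type) {}

    // Bucket events into fixed-size (tumbling) windows keyed by window
    // start time, and count events per window.
    public static Map<Long, Long> tumblingCounts(List<Event> events, long windowSeconds) {
        Map<Long, Long> counts = new TreeMap<>();
        for (Event e : events) {
            long windowStart = (e.epochSeconds() / windowSeconds) * windowSeconds;
            counts.merge(windowStart, 1L, Long::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        List<Event> events = List.of(
            new Event(0, "view"), new Event(30, "click"), new Event(65, "view"));
        // Two 60-second windows: [0,60) holds 2 events, [60,120) holds 1.
        System.out.println(tumblingCounts(events, 60)); // {0=2, 60=1}
    }
}
```

The hard parts a real engine adds on top of this core loop are distribution, late/out-of-order events, and fault-tolerant state.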

It's preferred, but not technically required, that you have:

  • Experience building self-service tooling and platforms
  • Built and designed Kappa architecture platforms
  • A passion for building and running continuous integration pipelines.
  • Built pipelines using Databricks and are well-versed in their APIs
  • Contributed to open source projects (Ex: Operators in Airflow)
More about the team
Data is a BIG deal at Atlassian. We ingest over 180 billion events each month into our analytics platform and we have dozens of teams across the company driving their decisions and guiding their operations based on the data and services we provide.

It’s the data engineering team’s job to make Atlassian more data-driven and facilitate growth. We do this by providing metrics and other data elements that are reliable and trustworthy, as well as services and data products that help teams better self-serve and improve their time to reliable insights.

You’ll be joining a team with a brand new mission, expanding into a new office. There will be plenty of challenges and scope to grow. We work very closely with Sales, Marketing and Commerce teams. We value when people ask hard questions and challenge each other to constantly improve our work. We are independent but love highly collaborative team environments, so you'll get the opportunity to work with lots of other awesome people just like you. We're all about enabling teams to execute growth and customer retention strategies by providing the right data fabrics and tools.

More about our benefits

Whether you work in an office or a distributed team, Atlassian is highly collaborative and yes, fun! To support you at work (and play) we offer some fantastic perks: ample time off to relax and recharge, flexible working options, five paid volunteer days a year for your favourite cause, an annual allowance to support your learning & growth, unique ShipIt days, a company paid trip after five years and lots more.

More about Atlassian

Creating software that empowers everyone from small startups to the who’s who of tech is why we’re here. We build tools like Jira, Confluence, Bitbucket, and Trello to help teams across the world become more nimble, creative, and aligned—collaboration is the heart of every product we dream of at Atlassian. From Amsterdam and Austin, to Sydney and San Francisco, we’re looking for people who want to write the future and who believe that we can accomplish so much more together than apart. At Atlassian, we’re committed to an environment where everyone has the autonomy and freedom to thrive, as well as the support of like-minded colleagues who are motivated by a common goal to: Unleash the potential of every team.

Additional Information

We believe that the unique contributions of all Atlassians are the driver of our success. To make sure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate on the basis of race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status.

All your information will be kept confidential according to EEO guidelines.
Share this job:
Backend Software Engineer, Identity Platform
 
backend java scala saas cloud aws Apr 14
Atlassian is continuing to hire for all open roles with all interviewing and on-boarding done virtually due to COVID-19. Everyone new to the team, along with our current staff, will temporarily work from home until it is safe to return to our offices.

Atlassian is looking for a talented backend software engineer to build the next generation Identity Platform.

Over the last two years, the Identity team has completely rebuilt its infrastructure around a microservices architecture with highly scalable services utilizing AWS resources. Aside from maintaining and growing user management features, Identity is responsible for operating its infrastructure reliably at a massive and constantly growing scale.

All products and services at Atlassian integrate with the Identity Platform, so you will collaborate with other Developer teams, Product Managers, Quality Engineers, and Support Engineers to ship an Identity experience that our users love. You will directly impact our customers' experience through the design and implementation of new features and functionalities. You will be a part of a small and high-energy team delivering improvements for our Identity infrastructure that powers all of our Cloud products.

On your first day, we'll expect you to have:

  • 4+ years of relevant industry experience
  • Specialization in Java and the Spring Framework
  • Proven understanding of microservice-oriented architecture and extensible REST APIs
  • Experience with AWS cloud infrastructure
  • Fluency in any one database technology (e.g. RDBMS like Oracle or Postgres and/or NoSQL like DynamoDB or Cassandra)

It’s awesome, but not required, if you have:

  • Knowledge of the principles of building fault-tolerant, reliable, and durable software systems
  • The ability to evaluate trade-offs between correctness, robustness, performance, space, and time
  • Experience taking ownership of features while working with a team on short- and long-running projects
  • Experience with OAuth 2.0, OpenID Connect, SAML protocols, and encryption technologies
  • Relational databases, such as MySQL and PostgreSQL
  • Large scale distributed systems and event-driven architectures
  • Understanding of the SaaS, PaaS, and IaaS industry, with hands-on experience with public cloud offerings (e.g., AWS, GAE, Azure)
  • Familiarity with other programming languages and frameworks, such as Node.js, Scala, and Go
  • Practical knowledge of agile software development methodologies (e.g., XP, scrum)
  • Experience with continuous delivery and infrastructure as code
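
The OAuth 2.0 and OpenID Connect experience mentioned above can be made concrete with a small sketch. For illustration only (this is not Atlassian code, and all names are assumptions): an identity service typically begins by extracting the bearer token from the `Authorization` header before validating it.

```scala
// Hypothetical sketch: extract a bearer token from an HTTP Authorization
// header. Real validation (signature, expiry, audience) is omitted.
object Auth {
  private val Prefix = "Bearer "

  def bearerToken(header: String): Option[String] =
    if (header.startsWith(Prefix)) Some(header.drop(Prefix.length)).filter(_.nonEmpty)
    else None
}
```

A production service would pass the extracted token on to a JWT or introspection-based validator rather than trusting it as-is.
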
More about our benefits

Whether you work in an office or a distributed team, Atlassian is highly collaborative and yes, fun! To support you at work (and play) we offer some fantastic perks: ample time off to relax and recharge, flexible working options, five paid volunteer days a year for your favourite cause, an annual allowance to support your learning & growth, unique ShipIt days, a company paid trip after five years and lots more.

More about Atlassian

Creating software that empowers everyone from small startups to the who’s who of tech is why we’re here. We build tools like Jira, Confluence, Bitbucket, and Trello to help teams across the world become more nimble, creative, and aligned—collaboration is the heart of every product we dream of at Atlassian. From Amsterdam and Austin, to Sydney and San Francisco, we’re looking for people who want to write the future and who believe that we can accomplish so much more together than apart. At Atlassian, we’re committed to an environment where everyone has the autonomy and freedom to thrive, as well as the support of like-minded colleagues who are motivated by a common goal to: Unleash the potential of every team.

Additional Information

We believe that the unique contributions of all Atlassians are the driver of our success. To make sure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate on the basis of race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status.

All your information will be kept confidential according to EEO guidelines.
Senior Cloud Software Engineer
cloud senior golang java python scala Apr 09

At CrowdStrike we’re on a mission - to stop breaches. Our groundbreaking technology, services delivery, and intelligence gathering together with our innovations in machine learning and behavioral-based detection, allow our customers to not only defend themselves, but do so in a future-proof manner. We’ve earned numerous honors and top rankings for our technology, organization and people – clearly confirming our industry leadership and our special culture driving it. We also offer flexible work arrangements to help our people manage their personal and professional lives in a way that works for them. So if you’re ready to work on unrivaled technology where your desire to be part of a collaborative team is met with a laser-focused mission to stop breaches and protect people globally, let’s talk.

About the Role:

The Sr. Software Engineer role is part of the Engineering team at CrowdStrike Romania, which builds globally distributed, fault-tolerant, and highly scalable cloud-based critical systems using Golang.

Don't worry if you don't know Golang; we will teach you!

If you are a hands-on engineer who loves to operate at scale, let's talk!

This position is open to candidates in Bucharest (Office or Romania Remote), Brasov, Cluj, Iasi and Timisoara (Remote).

You will:

  • Lead backend engineering efforts from rapid prototypes to large-scale application services across CrowdStrike products
  • Make it possible for internal teams to easily work with data at the petabyte scale
  • Leverage and build cloud-based services to support our top-rated security intelligence platform
  • Work with security researchers to troubleshoot time-sensitive production issues
  • Keep petabytes of critical business data safe, secure, and available
  • Brainstorm, define, and build collaboratively with members across multiple teams
  • Obsess about learning, and champion the newest technologies & tricks with others, raising the technical IQ of the team
  • Be mentored and mentor other developers on web, backend and data storage technologies and our system
  • Constantly re-evaluate our product to improve architecture, knowledge models, user experience, performance and stability
  • Be an energetic ‘self-starter’ with the ability to take ownership and be accountable for deliverables
  • Use and give back to the open source community

You'll use:

  • Golang
  • Python
  • Cassandra
  • Kafka
  • Elasticsearch
  • SQL
  • Redis
  • ZMQ
  • Hadoop
  • AWS Cloud
  • Git

What You’ll Need:

  • Bachelor's Degree in Computer Science (or commensurate experience in data structures/algorithms/distributed systems)
  • Strong programming skills – Python / Java / Scala or Golang
  • The ability to design scalable and re-usable SOA services
  • The ability to scale backend systems – sharding, partitioning, scaling horizontally are second nature to you
  • The desire to ship code and the love of seeing your bits run in production
  • Deep understanding of distributed systems and scalability challenges
  • Deep understanding of multi-threading, concurrency, and parallel processing technologies
  • Team player skills – we embrace collaborating as a team as much as possible
  • A thorough understanding of engineering best practices from appropriate testing paradigms to effective peer code reviews and resilient architecture
  • The ability to thrive in a fast-paced, test-driven, collaborative and iterative programming environment
  • The skills to meet your commitments on time and produce high quality software that is unit tested, code reviewed, and checked in regularly for continuous integration
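
The sharding and horizontal-scaling skills listed above have a simple core idea. As a hedged illustration (not CrowdStrike's implementation; names are made up): a record key is mapped deterministically to one of N shards, so the same key always lands on the same node.

```scala
// Illustrative sketch of key-based sharding: the same key always maps to
// the same shard, letting data be partitioned across nodes deterministically.
object Sharding {
  def shardFor(key: String, shards: Int): Int =
    math.floorMod(key.hashCode, shards) // floorMod keeps the result in [0, shards)
}
```

Real systems usually layer consistent hashing on top of this so that adding a shard moves only a fraction of the keys.
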

Bonus Points awarded for:

  • Contributions to the open source community (GitHub, Stack Overflow, blogging)
  • Existing exposure to Golang, Scala, AWS, Cassandra, Kafka, Redis, Splunk
  • Prior experience in the cybersecurity or intelligence fields

Benefits of Working at CrowdStrike:

  • Market leader in compensation
  • Comprehensive health benefits
  • Working with the latest technologies
  • Training budget (certifications, conferences)
  • Flexible work hours and remote friendly environment
  • Wellness programs
  • Stocked fridges, coffee, soda, and lots of treats
  • Peer recognition
  • Inclusive culture focused on people, customers and innovation
  • Regular team activities, including happy hours, community service events

Bring your experience in distributed technologies and algorithms, your great API and systems design sensibilities, and your passion for writing code that performs at extreme scale. You will help build a platform that scales to millions of events per second and terabytes of data per day. If you want a job that makes a difference in the world and operates at high scale, you’ve come to the right place.

We are committed to building an inclusive culture of belonging that not only embraces the diversity of our people but also reflects the diversity of the communities in which we work and the customers we serve. We know that the happiest and highest performing teams include people with diverse perspectives and ways of solving problems so we strive to attract and retain talent from all backgrounds and create workplaces where everyone feels empowered to bring their full, authentic selves to work.

CrowdStrike is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex including sexual orientation and gender identity, national origin, disability, protected veteran status, or any other characteristic protected by applicable federal, state, or local law.

Site Reliability Engineer
golang scala machine learning cloud aws testing Apr 09

At CrowdStrike we’re on a mission - to stop breaches. Our groundbreaking technology, services delivery, and intelligence gathering together with our innovations in machine learning and behavioral-based detection, allow our customers to not only defend themselves, but do so in a future-proof manner. We’ve earned numerous honors and top rankings for our technology, organization and people – clearly confirming our industry leadership and our special culture driving it. We also offer flexible work arrangements to help our people manage their personal and professional lives in a way that works for them. So if you’re ready to work on unrivaled technology where your desire to be part of a collaborative team is met with a laser-focused mission to stop breaches and protect people globally, let’s talk.

About the Role

At CrowdStrike we operate a massive cloud platform that protects our customers from a variety of bad actors: cyber criminals, hacktivists and state sponsored attackers. We process tens of billions of events a day and we store and use petabytes of data. We’re looking for an engineer who is passionate about site reliability and is excited about joining us to ensure our service runs 24/7.

This position is open to candidates in Bucharest (Office or Romania Remote), Brasov, Cluj, Iasi and Timisoara (Remote).

You will:

  • Be responsible for all operational aspects of our platform - Availability, Latency, Throughput, Monitoring, Issue Response (analysis, remediation, deployment) and Capacity Planning with respect to Latency and Throughput. Build tooling to help monitor and analyze the platform
  • Work in a team of highly motivated engineers
  • Use your passion for technology to ensure our platform operates flawlessly 24x7
  • Obsess about learning, and champion the newest technologies & tricks with others, raising the technical IQ of the team. We don’t expect you to know all the technology we use but you will be able to get up to speed on new technology quickly
  • Have broad exposure to our entire architecture and become one of our experts in overall process flow
  • Be a great code reader and debugger, you will have to dive into large code bases, identify issues and remediate
  • Have an intrinsic drive to make things better
  • Bias towards small development projects and the occasional larger project
  • Use and give back to the open source community

You'll use:

  • Go (Golang)
  • Python
  • ElasticSearch
  • Cassandra
  • Kafka
  • Redis, Memcached
  • AWS Cloud

Key Qualifications:

You have:

  • Degree in Computer Science (or commensurate experience in data structures/algorithms/distributed systems).
  • Experience as a sustaining engineer or SRE for a cloud-based product.
  • Good understanding of distributed systems and scalability challenges – sharding, partitioning, scaling horizontally are second nature to you.
  • A thorough understanding of engineering best practices from appropriate testing paradigms to effective peer code reviews and resilient architecture.
  • The ability to thrive in a fast-paced, test-driven, collaborative and iterative programming environment.
  • Good understanding of multi-threading, concurrency, and parallel processing technologies.
  • The skills to meet your commitments on time and produce high quality software that is unit tested, code reviewed, and checked in regularly for continuous integration.
  • Team player skills – we embrace collaborating as a team as much as possible.
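
The issue-response and remediation work described above often involves small tools rather than large projects. One illustrative example (purely a sketch; parameters are assumptions, not CrowdStrike tooling): retrying failed operations with capped exponential backoff, so a struggling dependency is not hammered.

```scala
// Illustrative sketch: capped exponential backoff. The delay doubles with
// each attempt (100 ms, 200 ms, 400 ms, ...) until it reaches the cap.
object Backoff {
  def delayMs(attempt: Int, baseMs: Long = 100L, capMs: Long = 10000L): Long =
    math.min(capMs, baseMs * (1L << math.min(attempt, 20))) // cap the shift to avoid overflow
}
```

Production retry loops typically add random jitter to these delays so that many clients do not retry in lockstep.
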

Bonus points awarded for:

  • Contributions to the open source community (GitHub, Stack Overflow, blogging).
  • Existing exposure to Go, Kafka, AWS, Cassandra, Elasticsearch, Scala, Hadoop, Spark
  • Prior experience in the cybersecurity or intelligence fields
  • Background or familiarity with File Integrity Monitoring (FIM), Cloud Security Posture Management (CSPM), or Vulnerability Management

Benefits of working at CrowdStrike:

  • Market leader in compensation
  • Comprehensive health benefits
  • Working with the latest technologies
  • Training budget (certifications, conferences)
  • Flexible work hours and remote friendly environment
  • Wellness programs
  • Stocked fridges, coffee, soda, and lots of treats
  • Peer recognition
  • Inclusive culture focused on people, customers and innovation
  • Regular team activities, including happy hours, community service events

We are committed to building an inclusive culture of belonging that not only embraces the diversity of our people but also reflects the diversity of the communities in which we work and the customers we serve. We know that the happiest and highest performing teams include people with diverse perspectives and ways of solving problems so we strive to attract and retain talent from all backgrounds and create workplaces where everyone feels empowered to bring their full, authentic selves to work.

CrowdStrike is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex including sexual orientation and gender identity, national origin, disability, protected veteran status, or any other characteristic protected by applicable federal, state, or local law.

Federal Solutions Architect - Secret Clearance
java python scala big data linux cloud Apr 06
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Sequoia Capital, Benchmark, and Index Ventures to recently invest a combined $125 million in our Series D financing. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!

We are looking for a Solutions Architect to join our Customer Success team. As a Solutions Architect (SA), you will help customers leverage streaming architectures and applications to achieve their business results. In this role, you will interact directly with customers to provide software architecture, design, and operations expertise that leverages your deep knowledge of and experience in Apache Kafka, the Confluent platform, and complementary systems such as Hadoop, Spark, Storm, relational and NoSQL databases. You will develop and advocate best practices, gather and validate critical product feedback, and help customers overcome their operational challenges.

Throughout all these interactions, you will build a strong relationship with your customer in a very short space of time, ensuring exemplary delivery standards. You will also have the opportunity to help customers build state-of-the-art streaming data infrastructure, in partnership with colleagues who are widely recognized as industry leaders, as well as to optimize and debug customers’ existing deployments.
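
The "continually updating stream of events, not static snapshots" idea behind event streaming can be sketched without any streaming infrastructure. The following is a minimal, library-free illustration (not Confluent or Kafka APIs): a stream of events folded into the current-state snapshot it implies.

```scala
// Illustrative sketch (not Confluent/Kafka APIs): reduce an event stream to
// a current-state snapshot by keeping the latest value seen for each key.
final case class Event(key: String, value: Int)

object StreamState {
  def latestByKey(events: Seq[Event]): Map[String, Int] =
    events.foldLeft(Map.empty[String, Int]) { (state, e) =>
      state.updated(e.key, e.value) // later events overwrite earlier ones
    }
}
```

Stream processors such as Kafka Streams apply essentially this fold continuously and incrementally, rather than over a finished sequence.
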

Location:
You will be based in LOCATION, with 50% travel expected.

Responsibilities

  • Helping a customer determine their platform and/or application strategy for moving to a more real-time, event-based business. Such engagements often involve remote preparation; presenting an onsite or remote workshop for the customer’s architects, developers, and operations teams; investigating (with Engineering and other coworkers) solutions to difficult challenges; and writing a recommendations summary doc.
  • Providing feedback to the Confluent Product and Engineering groups
  • Building tooling for another team or the wider company to help us push our technical boundaries and improve our ability to deliver consistently with high quality
  • Testing performance and functionality of new components developed by Engineering
  • Writing or editing documentation and knowledge base articles, including reference architecture materials and design patterns based on customer experiences
  • Honing your skills, building applications, or trying out new product features
  • Participating in community and industry events

Requirements

  • Deep experience designing, building, and operating in-production Big Data, stream processing, and/or enterprise data integration solutions, ideally using Apache Kafka
  • Demonstrated experience successfully managing multiple B2B infrastructure software development projects, including driving expansion, customer satisfaction, feature adoption, and retention
  • Experience operating Linux (configure, tune, and troubleshoot both RedHat and Debian-based distributions)
  • Experience using cloud providers (Amazon Web Services, Google Cloud, Microsoft Azure) for running high-throughput systems
  • Experience with Java Virtual Machine (JVM) tuning and troubleshooting
  • Experience with distributed systems (Kafka, Hadoop, Cassandra, etc.)
  • Proficiency in Java
  • Strong desire to tackle hard technical problems, and proven ability to do so with little or no direct daily supervision
  • Excellent communication skills, with an ability to clearly and concisely explain tricky issues and complex solutions
  • Ability to quickly learn new technologies
  • Ability and willingness to travel up to 50% of the time to meet with customers
  • TS/SCI clearance required

Bonus Points

  • Experience helping customers build Apache Kafka solutions alongside Hadoop technologies, relational and NoSQL databases, message queues, and related products
  • Experience with Scala, Python, or Go
  • Experience working with a commercial team and demonstrated business acumen
  • Experience working in a fast-paced technology start-up
  • Experience managing projects, using any known methodology to scope, manage, and deliver on plan no matter the complexity
  • Bachelor-level degree in computer science, engineering, mathematics, or another quantitative field


Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.
Senior Scala/Kubernetes Engineer
Luna  
kubernetes aws terraform scala senior saas Apr 04

Overview

Luna is looking for a senior cloud software engineer to take charge of the design, development, and evolution of the new SaaS offering for Luna, a project said by Singularity University to have the potential to change the lives of one-billion people. If you bring strong technical skills and have a passion for collaboration, this role could be for you.

As a senior cloud software engineer, you'll be leading the effort to design and develop our new SaaS offering, providing a web-based version of Luna to our clients. Your work will be integral to the next phase of Luna's development, as we expand our offering beyond the open-source project. You'll be able to work with a world-class team of skilled engineers, community managers, and business developers (from Bloomberg, GitHub and PayPal to name a few), and put your indelible stamp on Luna's future.

What You'll Do

As a senior cloud software engineer you'll be in charge of building the SaaS offering for Luna, hosting both the language and its IDE in the cloud. This will involve:

  • Working closely with the internal teams to design a secure and scalable SaaS architecture.
  • Developing a SaaS solution based upon that design with robust tooling and reliability, as well as inbuilt support for collaboration.
  • Hosting the architecture on a cloud provider without becoming too dependent on any one given platform.
  • Contributing to the evolution of this vibrant open-source project by bringing a new component to its ecosystem and product offering.
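
The point above about hosting on a cloud provider "without becoming too dependent on any one given platform" is usually achieved by programming against an abstraction, with one implementation per provider. A hypothetical sketch (names are illustrative, not Luna's code):

```scala
// Hypothetical sketch: application code depends on an ObjectStore trait,
// not on any single cloud provider; switching providers means switching
// implementations, not rewriting callers.
trait ObjectStore {
  def put(key: String, data: String): Unit
  def get(key: String): Option[String]
}

// An in-memory stand-in; real implementations would target S3, GCS, Azure Blob, etc.
final class InMemoryStore extends ObjectStore {
  private val entries = scala.collection.mutable.Map.empty[String, String]
  def put(key: String, data: String): Unit = entries(key) = data
  def get(key: String): Option[String] = entries.get(key)
}
```

The in-memory variant also doubles as a test fixture, which is a common side benefit of this design.
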

The Skills We're Looking For

We have a few particular skills that we're looking for in this role:

  • 3+ years experience in designing secure, scalable, and collaboration-ready SaaS architectures.
  • A strong commitment to security and scalability that permeates your approach to design.
  • Experience with Kubernetes deployment and administration using EKS.
  • Experience with Scala and Akka.
  • Practical knowledge about AWS networking and storage architectures, and how they integrate with Kubernetes.
  • Experience managing AWS resources using Terraform.
  • Experience working in an SRE capacity on monitoring, incident handling and continuous service improvement.
  • Experience building and delivering CI/CD pipelines to ensure service stability and reliability.
  • Experience employing DevOps practices such as the 'continuous everything' and 'everything as code' styles of work.
  • Experience working with Git, and preferably GitOps.

It would be a big bonus if you also had:

  • Skills working with Azure and GCP to help expand beyond AWS in the future.
  • Experience working in close conjunction with multiple product teams to ensure that the solutions you provide meet their needs.
Senior Software Engineer, Backend
Numbrs  
java backend microservices kubernetes machine-learning senior Mar 25

At Numbrs, our engineers don’t just develop things – we have an impact. We change the way people manage their finances by building the best products and services for our users.

Numbrs engineers are innovators, problem-solvers, and hard-workers who are building solutions in big data, mobile technology and much more. We look for professional, highly skilled engineers who evolve, adapt to change and thrive in a fast-paced, value-driven environment.

Join our dedicated technology team that builds massively scalable systems, designs low latency architecture solutions and leverages machine learning technology to turn financial data into action. Want to push the limit of personal finance management? Join Numbrs.

Job Description

You will be a part of a team that is responsible for developing, releasing, monitoring and troubleshooting large scale micro-service based distributed systems with high transaction volume. You enjoy learning new things and are passionate about developing new features, maintaining existing code, fixing bugs, and contributing to overall system design. You are a great teammate who thrives in a dynamic environment with rapidly changing priorities.

All candidates will have

  • a Bachelor's or higher degree in a technical field of study, or equivalent practical experience
  • experience with high volume production grade distributed systems
  • experience with micro-service based architecture
  • experience with software engineering best practices, coding standards, code reviews, testing and operations
  • hands-on experience with Spring Boot
  • professional experience in writing readable, testable and self-sustaining code
  • strong hands-on experience with Java (minimum 8 years)
  • knowledge of AWS, Kubernetes, and Docker
  • excellent troubleshooting and creative problem-solving abilities
  • excellent written and oral communication in English and interpersonal skills

Ideally, candidates will also have

  • experience with Big Data technologies such as Kafka, Spark, and Cassandra
  • experience with CI/CD toolchain products like Jira, Stash, Git, and Jenkins
  • fluency with functional, imperative, and object-oriented languages
  • experience with Scala, C++, or Golang
  • knowledge of Machine Learning

Location: residence in UK mandatory; home office

Full Stack Engineer - DSS
Dataiku  
full stack java python javascript scala big data Mar 13
Dataiku’s mission is big: to enable all people throughout companies around the world to use data by removing friction surrounding data access, cleaning, modeling, deployment, and more. But it’s not just about technology and processes; at Dataiku, we also believe that people (including our people!) are a critical piece of the equation.



As a full stack developer in the Dataiku engineering team, you will play a crucial role in helping us have a real impact on the daily life of data analysts and scientists. You will be joining one of three teams that develop new features and improve existing parts of Data Science Studio (DSS) based on user feedback.

DSS is an on-premises application that connects together all big data technologies. We work with SQL databases, Spark, Kubernetes, Hadoop, Elasticsearch, MLlib, scikit-learn, Shiny, … and many more. Basically, our technological stack is made of all the technologies present in Technoslavia!

Our backend is mainly written in Java but also includes large chunks in Scala, Python and R. Our frontend is based on Angular and also makes vast usage of d3.js.

One of the most unique characteristics of DSS is the breadth of its scope and the fact that it caters both to data analysts (with visual and easy to use analytics) and data scientists (with deep integration in code and libraries, and a web-based IDE).

This is a full-time position, based in France either in our Paris office or remote.

Your missions

  • Turn ideas or simplistic specifications into full-fledged product features, including unit and end-to-end tests.
  • Tackle complex problems that range from performance and scalability to usability, so that complicated machineries look straightforward and simple to use for our users.
  • Help your coworkers: review code, spread your technical expertise, improve our tool chain
  • Bring your energy to the team!

You are the ideal recruit if

  • You have mastered a programming language (Java, C#, Python, JavaScript, you name it).
  • You know that low-level Java code and slick web applications in JavaScript are two sides of the same coin, and you are eager to work on both.
  • You know that ACID is not a chemistry term.
  • You have a first experience (either professional or personal) building a real product or working with big data or cloud technologies.

Hiring process

  • Initial call with the talent acquisition manager
  • On-site meeting (or video call) with the hiring manager
  • Home test to show your skills
  • Final on-site interviews


To fulfill its mission, Dataiku is growing fast! In 2019, we achieved unicorn status, went from 200 to 400 people and opened new offices across the globe. We now serve our global customer base from our headquarters in New York City as well as offices in Paris, London, Munich, Amsterdam, Denver, Los Angeles, Singapore, Sydney and Dubaï. Each of them has a unique culture, but underpinning local nuances, we always value curiosity, collaboration, and can-do attitudes!
Machine Learning Engineer or Data Scientist
python machine-learning nlp artificial-intelligence machine learning scala Feb 22

Builders and Fixers Wanted!

Company Description:  

Ephesoft is the leader in Context Driven Productivity solutions, helping organizations maximize productivity and fuel their journey towards the autonomous enterprise through contextual content acquisition, process enrichment and amplifying the value of enterprise data. The Ephesoft Semantik Platform turns flat data into context-rich information to fuel data scientists, business users and customers with meaningful data to automate and amplify their business processes. Thousands of customers worldwide employ Ephesoft’s platform to accelerate nearly any process and drive high value from their content. Ephesoft is headquartered in Irvine, Calif., with regional offices throughout the US, EMEA and Asia Pacific. To learn more, visit ephesoft.com.

Ready to invent the future? Ephesoft is immediately hiring a talented, driven Machine Learning Engineer or Data Scientist to play a key role in developing a high-profile AI platform in use by organizations around the world. The ideal candidate will have experience in developing scalable machine learning products for different contexts such as object detection, information retrieval, image recognition, and/or natural language processing.

In this role you will:

  • Develop and deliver CV and NLP systems to bring structure and understanding to unstructured documents.
  • Innovate by designing novel solutions to emerging and extant problems within the domain of invoice processing.
  • Be part of a team of Data Scientists, Semantic Architects, and Software Developers responsible for developing AI, ML, and Cognitive Technologies while building a pipeline to continuously deliver new capabilities and value. 
  • Implement creative data-acquisition and labeling solutions that will form the foundations of new supervised ML models.
  • Communicate effectively with stakeholders to convey technical vision for the AI capabilities in our solutions. 

 You will bring to this role:

  • Love for solving problems and working in a small, agile environment.
  • Hunger for learning new skills and sharing your findings with others.
  • Solid understanding of good research principles and experimental design.
  • Passion for developing and improving CV/AI components, not just grabbing something off the shelf.
  • Excitement about developing state-of-the-art, ground-breaking technologies and owning them from imagination to production.

Qualifications:

  • 3+ years of experience developing and building AI/ML driven solutions
  • Development experience in at least one object-oriented programming language  (Java, Scala, C++) with preference given to Python experience
  • Demonstrated skills with ML, CV and NLP libraries/frameworks such as NLTK, spaCy, Scikit-Learn, OpenCV, Scikit-Image
  • Strong experience with deep learning libraries/frameworks like TensorFlow, PyTorch, or Keras
  • Proven background of designing and training machine learning models to solve real-world business problems

EEO Statement:

Ephesoft embraces diversity and equal opportunity. We are committed to building a team that represents a variety of backgrounds, perspectives, and skills. We believe the more inclusive we are, the better our company will be.

Senior Data Engineer
apache machine-learning algorithm senior python scala Feb 19

SemanticBits is looking for a talented Senior Data Engineer who is eager to apply computer science, software engineering, databases, and distributed/parallel processing frameworks to prepare big data for the use of data analysts and data scientists. You will mentor junior engineers and deliver data acquisition, transformations, cleansing, conversion, compression, and loading of data into data and analytics models. You will work in partnership with data scientists and analysts to understand use cases, data needs, and outcome objectives. You are a practitioner of advanced data modeling and optimization of data and analytics solutions at scale; an expert in data management, data access (big data, data marts, etc.), programming, and data modeling; and familiar with analytic algorithms and applications (like machine learning).

Requirements

  • Bachelor’s degree in computer science (or related) and eight years of professional experience
  • Strong knowledge of computer science fundamentals: object-oriented design and programming, data structures, algorithms, databases (SQL and relational design), networking
  • Demonstrable experience engineering scalable data processing pipelines.
  • Demonstrable expertise with Python, Spark, and wrangling of various data formats - Parquet, CSV, XML, JSON.
  • Experience with the following technologies is highly desirable: Redshift (w/Spectrum), Hadoop, Apache NiFi, Airflow, Apache Kafka, Apache Superset, Flask, Node.js, Express, AWS EMR, Scala, Tableau, Looker, Dremio
  • Experience with Agile methodology, using test-driven development.
  • Excellent command of written and spoken English
  • Self-driven problem solver
Senior Data Engineer
Acast  
senior java scala big data docker cloud Feb 10
Acast is the world-leading technology platform for on-demand audio and podcasting, with offices in Stockholm, London, New York, Los Angeles, Sydney, Paris, Oslo and Berlin. We have over 150M monthly listens today, and are growing rapidly. At our core is a love of audio and the fascinating stories our podcasters tell.

We are a flat organization that supports a culture of autonomy and respect, and we find that those with an entrepreneurial spirit and a curious mindset thrive at Acast.

We are looking for a Senior Data Engineer to join a new purpose-driven team that will create data-driven products to help other teams provide smarter solutions to our end customers, as well as core datasets for business-critical use cases such as payouts to our podcasters. This team’s ambition is to transform our data into insights. The products you build will be used by our mobile apps, by the product suite we have for podcast creators and advertisers, and by other departments within Acast.

In this role you will work with other engineers and product owners within a cross-functional agile team.

You

  • Have 3+ years of experience building robust big data ETL pipelines within the Hadoop ecosystem: Spark, Hive, Presto, etc.
  • Are proficient in Java or Scala, and Python
  • Have experience with the AWS cloud environment: EMR, Glue, Kinesis, Athena, DynamoDB, Lambda, Redshift, etc.
  • Have strong knowledge of SQL and NoSQL database design and modelling, and know the differences between modern big data systems and traditional data warehousing
  • Have DevOps and infrastructure-as-code experience (a plus), and are familiar with tools like Jenkins, Ansible, Docker, Kubernetes, CloudFormation, Terraform, etc.
  • Advocate agile software development practices and balance trade-offs in time, scope and quality
  • Are curious and a fast learner who can adapt quickly and enjoys a dynamic and ever-changing environment

Benefits

  • Monthly wellness allowance
  • 30 days holiday
  • Flexible working
  • Pension scheme
  • Private medical insurance
Our engineering team is mostly located in central Stockholm, but with a remote first culture we’re able to bring on people who prefer full time remote work from Sweden, Norway, UK, France and Germany.

Do you want to be part of our ongoing journey? Apply now!

Solutions Architect - Pacific Northwest
java python scala big data linux cloud Feb 07
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Sequoia Capital, Benchmark, and Index Ventures to recently invest a combined $125 million in our Series D financing. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!

We are looking for a Solutions Architect to join our Customer Success team. As a Solutions Architect (SA), you will help customers leverage streaming architectures and applications to achieve their business results. In this role, you will interact directly with customers to provide software architecture, design, and operations expertise that leverages your deep knowledge of and experience in Apache Kafka, the Confluent platform, and complementary systems such as Hadoop, Spark, Storm, relational and NoSQL databases. You will develop and advocate best practices, gather and validate critical product feedback, and help customers overcome their operational challenges.

Throughout all these interactions, you will build a strong relationship with your customer in a very short space of time, ensuring exemplary delivery standards. You will also have the opportunity to help customers build state-of-the-art streaming data infrastructure, in partnership with colleagues who are widely recognized as industry leaders, as well as optimizing and debugging customers' existing deployments.

Location:
You will be based anywhere in the Pacific Northwest, with 60-70% travel expected.

Responsibilities

  • Helping a customer determine their platform and/or application strategy for moving to a more real-time, event-based business. Such engagements often involve remote preparation; presenting an onsite or remote workshop for the customer’s architects, developers, and operations teams; investigating (with Engineering and other coworkers) solutions to difficult challenges; and writing a recommendations summary doc.
  • Providing feedback to the Confluent Product and Engineering groups
  • Building tooling for another team or the wider company to help us push our technical boundaries and improve our ability to deliver consistently with high quality
  • Testing performance and functionality of new components developed by Engineering
  • Writing or editing documentation and knowledge base articles, including reference architecture materials and design patterns based on customer experiences
  • Honing your skills, building applications, or trying out new product features
  • Participating in community and industry events

Requirements

  • Deep experience designing, building, and operating in-production Big Data, stream processing, and/or enterprise data integration solutions, ideally using Apache Kafka
  • Demonstrated experience successfully managing multiple B2B infrastructure software development projects, including driving expansion, customer satisfaction, feature adoption, and retention
  • Experience operating Linux (configure, tune, and troubleshoot both RedHat and Debian-based distributions)
  • Experience using cloud providers (Amazon Web Services, Google Cloud, Microsoft Azure) for running high-throughput systems
  • Experience with Java Virtual Machine (JVM) tuning and troubleshooting
  • Experience with distributed systems (Kafka, Hadoop, Cassandra, etc.)
  • Proficiency in Java
  • Strong desire to tackle hard technical problems, and proven ability to do so with little or no direct daily supervision
  • Excellent communication skills, with an ability to clearly and concisely explain tricky issues and complex solutions
  • Ability to quickly learn new technologies
  • Ability and willingness to travel up to 50% of the time to meet with customers

Bonus Points

  • Experience helping customers build Apache Kafka solutions alongside Hadoop technologies, relational and NoSQL databases, message queues, and related products
  • Experience with Scala, Python, or Go
  • Experience working with a commercial team and demonstrated business acumen
  • Experience working in a fast-paced technology start-up
  • Experience managing projects, using any known methodology to scope, manage, and deliver on plan no matter the complexity
  • Bachelor-level degree in computer science, engineering, mathematics, or another quantitative field


Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.
Senior Software Engineer at Jack Henry & Associates, Inc.
scala fs2 http4s microservices distributed-system senior Feb 05

At Banno, we believe that the world is a better place when community banks and credit unions exist to serve their communities. Our mission is to build the technology that gives community financial institutions the tools they need to compete against the big banks. Banno is redefining the relationship between forward-thinking financial institutions and their customers.


About You

You are infinitely curious and thrive in an environment where you are constantly learning and growing. You want to be somewhere that you are trusted and set up for success.  You want to be surrounded by other great engineers that drive you to be better every day.

Although you work in a team, you are self-motivated and able to work independently. You want to own the deliverable from start to finish by working with the product manager, defining the scope and seeing the work all the way through to deployment in production. You care deeply about your work, your team, and the end user.

Banno values trust and those with a bias towards action.  We are confident you will love it here.


What you and your team are working on

As a Senior Scala Engineer, you work with your team to provide APIs and back end services for a suite of digital banking products, including native mobile and web applications. Our APIs are first-class citizens and are consumed by both our internal teams as well as teams outside of Banno.

You are keeping our services up-to-date with the newest development and deployment practices. You are responsible for maintaining our services in a microservices environment and for implementing the tools necessary for observability and monitoring of those services.

This position can be worked 100% REMOTE from any US location.


Minimum Qualifications

  • Minimum 6 years of experience with server-side programming languages in production.

Preferred Qualifications

  • Knowledge of or experience with microservice architecture.
  • Experience with functional programming languages. 
  • Experience with the Scala libraries cats, http4s, and doobie.
  • Experience with event driven architecture using Kafka.
  • Experience with observability and monitoring.
Data Science Engineer
data science java python scala big data cloud Feb 05
Contrast Security is the world’s leading provider of security technology that enables software applications to protect themselves against cyber attacks. Contrast's patented deep security instrumentation is the breakthrough technology that enables highly accurate analysis and always-on protection of an entire application portfolio, without disruptive scanning or expensive security experts. Only Contrast has intelligent agents that work actively inside applications to prevent data breaches, defeat hackers and secure the entire enterprise from development, to operations, to production.

Our Application Security Research (Contrast Labs) team is hyper-focused on continuous vulnerability and threat research affecting the world's software ecosystem. As a Data Science Engineer on the Research team, you will be responsible for expanding and optimizing data from our real-time security intelligence platform, as well as optimizing data flow and collection for cross-functional teams.

The Data Science Engineer will support our research team, software developers, database architects, marketing associates, product team, and other areas of the company on data initiatives, and will ensure that optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company's data architecture to support our next generation of products and data initiatives. The role also presents an opportunity to contribute original research as a data scientist through data correlation.

The Data Science Engineer is responsible for supporting and contributing to Contrast’s growing original security research efforts relevant to the development communities associated with the Contrast Assess, Protect, and OSS platforms. Original research will be published in company blogs, papers, and presentations.

If you're amazing but missing some of these, email us your résumé and cover letter anyway. Please include a link to your GitHub or Bitbucket account, as well as any links to some of your projects if available.

Responsibilities

  • Conduct basic and applied research on important and challenging problems in data science as it relates to the problems Contrast is trying to solve.
  • Assemble large, complex data sets that meet functional / non-functional business requirements. 
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and big data technologies.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into threats, vulnerabilities, customer usage, operational efficiency and other key business performance metrics.
  • Help define and drive data-driven research projects, either on your own or in collaboration with others on the team.
  • Engage with Contrast’s product teams and customers to promote and seek out new data science research initiatives.
  • Create data tools for analytics and research team members that assist them in building and optimizing our product into an innovative industry leader.
  • Advanced working Structured Query Language (SQL) knowledge and experience working with relational databases, query authoring as well as working familiarity with a variety of databases.
  • Develop and present content associated with the research through conference speaking and/or blogging.

About You

  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets. 
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • You should also have experience using some of the following software/tools:
  • Big data tools: Hadoop, Spark, Kafka, etc.
  • Relational SQL and NoSQL databases, including MongoDB and MySQL.
  • Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
  • AWS cloud services: EC2, EMR, RDS, Redshift
  • Stream-processing systems: Storm, Spark-Streaming, etc.
  • Object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
  • 5+ years of experience in a Data Science role
  • Strong project management and organizational skills.
  • Nice to have understanding of the OWASP Top 10 and SANS/CWE Top 25.
  • You ask questions, let others know when you need help, and tell others what you need.
  • Attained at minimum a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.

What We Offer

  • Competitive compensation
  • Daily team lunches (in office)
  • Meaningful stock options
  • Medical, dental, and vision benefits
  • Flexible paid time off 
By submitting your application, you are providing Personally Identifiable Information about yourself (cover letter, resume, references, or other employment-related information) and hereby give your consent for Contrast Security, and/or our HR-related Service Providers, to use this information for the purpose of processing, evaluating and responding to your application for current and future career opportunities. Contrast Security is an equal opportunity employer and our team is comprised of individuals from many diverse backgrounds, lifestyles and locations.

The California Consumer Privacy Act of 2018 (“CCPA”) will go into effect on January 1, 2020. Under CCPA, businesses must be overtly transparent about the personal information they collect, use, and store on California residents. CCPA also gives employees, applicants, independent contractors, emergency contacts and dependents (“CA Employee”) new rights to privacy.

In connection with your role here at Contrast, we collect information that identifies, reasonably relates to, or describes you (“Personal Information”). The categories of Personal Information that we collect, use or store include your name, government-issued identification number(s), email address, mailing address, emergency contact information, employment history, educational history, criminal record, demographic information, and other electronic network activity information by way of mobile device management on your Contrast-issued equipment. We collect and use those categories of Personal Information (the majority of which is provided by you) about you for human resources and other business-driven purposes, including evaluating your performance here at Contrast, evaluating you as a candidate for promotion within Contrast, managing compensation (including payroll and benefits), record keeping in relation to recruiting and hiring, conducting background checks as permitted by law, and ensuring compliance with applicable legal requirements for Contrast. We collect, use and store the minimal amount of information possible.

We also collect Personal Information in connection with your application for benefits. In addition to the above, Personal Information also identifies those on behalf of whom you apply for benefits. During your application for benefits, the categories of Personal Information that we collect include name, government-issued identification number(s), email address, mailing address, emergency contact information, and demographic information. We collect and use those categories of Personal Information for administering the benefits for which you are applying and ensuring compliance with applicable legal requirements and Contrast policies.
As a California resident, you are entitled to certain rights under CCPA:

-You have the right to know what personal information we have collected from you as a California employee;
-You have the right to know what personal information is sold or disclosed and to whom. That said, we do not sell your information. We do, however, disclose information to third parties in connection with the management of payroll, employee benefits, etc. to fulfill our obligations to you as an employee of Contrast. Each of those third parties has been served with a Notice to Comply with CCPA or has entered into a CCPA Addendum with Contrast that precludes them from selling your information;
-You have the right to opt out of the sale of your personal information. Again, we do not sell it, but you might want to be aware of that right as a "consumer" in California with respect to other businesses; and
-You have the right to be free from retaliation for exercising any of these rights.

If you have any questions, please let us know!
Senior Data Engineer
Medium  
senior java python scala aws frontend Jan 29
At Medium, words matter. We are building the best place for reading and writing on the internet—a place where today’s smartest writers, thinkers, experts, and storytellers can share big, interesting ideas; a place where ideas are judged on the value they provide to readers, not the fleeting attention they can attract for advertisers.

We are looking for a Senior Data Engineer that will help build, maintain, and scale our business critical Data Platform. In this role, you will help define a long-term vision for the Data Platform architecture and implement new technologies to help us scale our platform over time. You'll also lead development of both transactional and data warehouse designs, mentoring our team of cross functional engineers and Data Scientists.

At Medium, we are proud of our product, our team, and our culture. Medium’s website and mobile apps are accessed by millions of users every day. Our mission is to move thinking forward by providing a place where individuals, along with publishers, can share stories and their perspectives. Behind this beautifully-crafted platform is our engineering team who works seamlessly together. From frontend to API, from data collection to product science, Medium engineers work multi-functionally with open communication and feedback.

What Will You Do!

  • Work on high impact projects that improve data availability and quality, and provide reliable access to data for the rest of the business.
  • Drive the evolution of Medium's data platform to support near real-time data processing and new event sources, and to scale with our fast-growing business.
  • Help define the team strategy and technical direction, advocate for best practices, investigate new technologies, and mentor other engineers.
  • Design, architect, and support new and existing ETL pipelines, and recommend improvements and modifications.
  • Be responsible for ingesting data into our data warehouse and providing frameworks and services for operating on that data including the use of Spark.
  • Analyze, debug and maintain critical data pipelines.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Spark and AWS technologies.

Who You Are!

  • You have 7+ years of software engineering experience.
  • You have 3+ years of experience writing and optimizing complex SQL and ETL processes, preferably in connection with Hadoop or Spark.
  • You have outstanding coding and design skills, particularly in Java/Scala and Python.
  • You have helped define the architecture, tooling, and strategy for a large-scale data processing system.
  • You have hands-on experience with AWS and services like EC2, SQS, SNS, RDS, Cache etc or equivalent technologies.
  • You have a BS in Computer Science / Software Engineering or equivalent experience.
  • You have knowledge of Apache Spark, Spark streaming, Kafka, Scala, Python, and similar technology stacks.
  • You have a strong understanding & usage of algorithms and data structures.

Nice To Have!

  • Snowflake knowledge and experience
  • Looker knowledge and experience
  • Dimensional modeling skills
At Medium, we foster an inclusive, supportive, fun yet challenging team environment. We value having a team that is made up of a diverse set of backgrounds and respect the healthy expression of diverse opinions. We embrace experimentation and the examination of all kinds of ideas through reasoning and testing. Come join us as we continue to change the world of digital media. Medium is an equal opportunity employer.

Interested? We'd love to hear from you.
Consulting Engineer
java python scala big data linux azure Jan 17
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Sequoia Capital, Benchmark, and Index Ventures to recently invest a combined $125 million in our Series D financing. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!

Consulting Engineers drive customer success by helping them realize business value from the burgeoning flow of real-time data streams in their organizations. In this role you’ll interact directly with our customers to provide software, development and operations expertise, leveraging deep knowledge of best practices in the use of Apache Kafka, the broader Confluent Platform, and complementary systems like Hadoop, Spark, Storm, relational databases, and various NoSQL databases.  

Throughout all of these interactions, you’ll build strong relationships with customers, ensure exemplary delivery standards, and have a lot of fun building state-of-the-art streaming data infrastructure alongside colleagues who are widely recognized as leaders in this space.

Promoting Confluent and our amazing team to the community and wider public audience is something we invite all our employees to take part in. This can be in the form of writing blog posts, speaking at meetups and well-known industry events about use cases and best practices, or something as simple as releasing code.

While Confluent is headquartered in Palo Alto, you can work remotely from any location on the East Coast of the United States as long as you are able to travel to client engagements as needed.

A typical week at Confluent in this role may involve:

  • Preparing for an upcoming engagement, discussing the goals and expectations with the customer and preparing an agenda
  • Researching best practices or components required for the engagement
  • Delivering an engagement on-site, working with the customer’s architects and developers in a workshop environment
  • Producing and delivering the post-engagement report to the customer
  • Developing applications on Confluent Kafka Platform
  • Deploying, augmenting, and upgrading Kafka clusters
  • Building tooling for another team and the wider company
  • Testing performance and functionality of new components developed by Engineering
  • Writing or editing documentation and knowledge base articles
  • Honing your skills, building applications, or trying out new product features

Required Skills:

  • Deep experience building and operating in-production Big Data, stream processing, and/or enterprise data integration solutions using Apache Kafka
  • Experience operating Linux (configure, tune, and troubleshoot both RedHat and Debian-based distributions)
  • Experience with Java Virtual Machine (JVM) tuning and troubleshooting
  • Experience with distributed systems (Kafka, Hadoop, Cassandra, etc.)
  • Proficiency in Java
  • Excellent communication skills, with an ability to clearly and concisely explain tricky issues and complex solutions
  • Ability and willingness to travel up to 50% of the time to meet with customers
  • Bachelor-level degree in computer science, engineering, mathematics, or another quantitative field
  • Ability to travel up to 60-75% of your time to client engagements

Nice to have:

  • Experience using Amazon Web Services, Azure, and/or GCP for running high-throughput systems
  • Experience helping customers build Apache Kafka solutions alongside Hadoop technologies, relational and NoSQL databases, message queues, and related products
  • Experience with Python, Scala, or Go
  • Experience with configuration and management tools such as Ansible, Terraform, Puppet, Chef
  • Experience writing to network-based APIs (preferably REST/JSON or XML/SOAP)
  • Knowledge of enterprise security practices and solutions, such as LDAP and/or Kerberos
  • Experience working with a commercial team and demonstrated business acumen
  • Experience working in a fast-paced technology start-up
  • Experience managing projects, using any known methodology to scope, manage, and deliver on plan no matter the complexity
Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.
Senior Data Scientist
r machine-learning python apache-spark cluster-analysis senior Jan 08

In the Senior Data Scientist role, you will have full ownership over the projects you tackle, contribute to solving a wide range of machine learning applications, and find opportunities where data can improve our platform and company. We are looking for an experienced and creative self-starter who executes well and can exhibit exceptional technical know-how and strong business sense to join our team. 


WHAT YOU'LL DO:

  • Mine and analyze data from company data stores to drive optimization and improvement of product development, marketing techniques and business strategies
  • Assess the effectiveness and accuracy of data sources and data gathering techniques
  • Develop and implement data cleansing and processing to evaluate and optimize data quality
  • Develop custom data models and algorithms to apply to data sets
  • Run complex SQL queries and existing automations to correlate disparate data to identify questions and pull critical information
  • Apply statistical analysis and machine learning to uncover new insights and predictive models for our clients
  • Develop company A/B testing framework and test model quality
  • Collaborate with data engineering and ETL teams to deploy models / algorithms in production environment for operations use
  • Develop processes and tools to monitor and analyze model performance and data accuracy
  • Perform ad-hoc analysis and present results in a clear manner
  • Create visualizations and storytelling
  • Communicate statistical analyses and machine learning models to executives and clients
  • Create and manage APIs
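One of the responsibilities above is developing the company A/B testing framework. At its core, that reduces to significance testing of experiment results. A minimal stdlib-only sketch in Python (all numbers hypothetical; a real framework would also handle sequential testing, multiple variants, and multiple-comparison corrections):

```python
import math

def z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: does variant B convert significantly
    better (or worse) than variant A?"""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference).
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: 4.8% vs 5.6% conversion on 10k users each.
z = z_score(480, 10_000, 560, 10_000)
print(f"z = {z:.2f}")  # |z| > 1.96 -> significant at the 5% level
```

For the example above, z is about 2.55, so the uplift would be significant at the 5% level under the usual normal approximation.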

WHO YOU ARE:

  • 3-5+ years of relevant work experience
  • Extensive knowledge of Python and R
  • Clear understanding of various analytical functions (median, rank, etc.) and how to use them on data sets
  • Expertise in mathematics, statistics, correlation, data mining and predictive analysis
  • Experience with deep statistical insights and machine learning (Bayesian, clustering, etc.)
  • Familiarity with AWS Cloud Computing including: EC2, S3, EMR.
  • Familiarity with Geospatial Analysis/GIS
  • Other experience with programming languages such as Java, Scala and/or C#
  • Proficiency using query languages such as SQL, Hive, and Presto
  • Familiarity with BDE (Spark/pyspark, MapReduce, or Hadoop)
  • Familiarity with software development tools and platforms (Git, Linux, etc.)
  • Proven ability to drive business results with data-based insights
  • Self-initiative and an entrepreneurial mindset
  • Strong communication skills
  • Passion for data
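The "analytical functions (median, rank, etc.)" requirement above can be illustrated with a small stdlib-only Python sketch (hypothetical data; in SQL these would be MEDIAN/PERCENTILE_CONT and DENSE_RANK window functions):

```python
def median(xs):
    """Middle value of the sorted data (mean of the two middle values
    when the count is even)."""
    s = sorted(xs)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 == 1 else (s[mid - 1] + s[mid]) / 2

def dense_rank(xs):
    """Dense rank, descending: the largest value gets rank 1 and ties
    share a rank, like SQL's DENSE_RANK() OVER (ORDER BY x DESC)."""
    return {v: i + 1 for i, v in enumerate(sorted(set(xs), reverse=True))}

revenue = [12, 7, 7, 30, 18]  # hypothetical per-client values
print(median(revenue))        # 12
print(dense_rank(revenue))    # {30: 1, 18: 2, 12: 3, 7: 4}
```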

WHAT WE OFFER:

  • Competitive Salary
  • Medical, Dental and Vision
  • 15 Days of PTO (Paid Time Off)
  • Lunch provided 2x a week 
  • Snacks, snacks, snacks!
  • Casual dress code
Senior Software Engineer, Data Pipeline
java scala go elasticsearch apache-spark senior Dec 31 2019

About the Opportunity

The SecurityScorecard ratings platform helps enterprises across the globe manage the cyber security posture of their vendors. Our SaaS products have created a new category of enterprise software and our culture has helped us be recognized as one of the 10 hottest SaaS startups in NY for two years in a row. Our investors include both Sequoia and Google Ventures. We are scaling quickly but are ever mindful of our people and products as we grow.

As a Senior Software Engineer on the Data Pipeline Platform team, you will help us scale, support, and build the next-generation platform for our data pipelines. The team’s mission is to empower data scientists, software engineers, data engineers, and threat intelligence engineers to accelerate the ingestion of new data sources and to present the data in a meaningful way to our clients.

What you will do:

  • Design and implement systems for ingesting, transforming, connecting, storing, and delivering data from a wide range of sources with various levels of complexity and scale.
  • Enable other engineers to deliver value rapidly with minimum duplication of effort.
  • Automate the infrastructure supporting the data pipeline as code and deployments by improving CI/CD pipelines.
  • Monitor, troubleshoot, and improve the data platform to maintain stability and optimal performance.

Who you are:

  • Bachelor's degree or higher in a quantitative/technical field such as Computer Science, Engineering, Math
  • 6+ years of software development experience
  • Exceptional skills in at least one high-level programming language (Java, Scala, Go, Python or equivalent)
  • Strong understanding of big data technologies such as Kafka, Spark, Storm, Cassandra, Elasticsearch
  • Experience with AWS services including S3, Redshift, EMR and RDS
  • Excellent communication skills to collaborate with cross functional partners and independently drive projects and decisions

What to Expect in Our Hiring Process:

  • Phone conversation with Talent Acquisition to learn more about your experience and career objectives
  • Technical phone interview with hiring manager
  • Video or in person interviews with 1-3 engineers
  • At-home technical assessment
  • Video or in person interview with engineering leadership
Software Engineering Manager
scala functional-programming http4s fs2 scala-cats manager Dec 30 2019

As an Engineering Manager on a services team for the Banno Platform at Jack Henry, you’ll get the chance to make a positive impact on people’s lives. We believe that the world is a better place with community banks and credit unions. Our mission is to build the technology that gives community banks and credit unions the tools they need to compete against the big banks.

Service teams create highly scalable public APIs used by millions of customers to normalize access to multiple banking systems for use in our mobile and online banking clients. You’ll work on a team deploying and monitoring their own services. Our platform is primarily functional Scala, with a few services written in Haskell, Node.js, and Rust.

Ideal candidates are self-motivated, technically competent servant leaders with experience building, mentoring and growing their team. The first six months will be spent as an individual contributor engineer on the team, learning the domain and building trust with team members.

We are committed to creativity, thoughtfulness, and openness. Our team is highly distributed, meaning you will work with kind, talented engineers from across the United States. Occasional travel may be required for professional development conferences or company meetings.

This is a remote position with the ability to collocate at several JHA locations nationwide if desired.

Minimum Qualifications

  • Minimum 7 years of experience with server-side programming languages.
  • Minimum 1 year of team lead, supervisory or management experience.
  • Minimum 1 year developing, maintaining, and supporting public-facing APIs in production.
  • Knowledge of or experience with microservice architecture in a production environment.

Preferred Qualifications

  • Experience with Scala or Haskell in a production environment.
  • Understanding of the functional programming paradigm.
  • Experience with the cats, fs2, http4s, and doobie libraries.
  • Experience with tools like Kafka, Kinesis, AWS Lambda, Azure Functions.
  • Experience with Kubernetes.
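To illustrate the functional programming paradigm these qualifications refer to, here is a hedged plain-Scala sketch: fallible steps modeled as pure functions returning `Either`, composed in a for-comprehension so errors short-circuit. The account type and in-memory store are hypothetical; production services on a stack like this would reach for cats, fs2, and http4s, but the composition pattern is the same.

```scala
// Hypothetical sketch of typed-error composition in the functional style.
object AccountLookup {
  final case class Account(id: String, balanceCents: Long)

  // Hypothetical in-memory store standing in for a banking core system.
  private val accounts = Map("acct-1" -> Account("acct-1", 12500L))

  def parseId(raw: String): Either[String, String] =
    if (raw.startsWith("acct-")) Right(raw) else Left(s"bad id: $raw")

  def fetch(id: String): Either[String, Account] =
    accounts.get(id).toRight(s"not found: $id")

  // Errors short-circuit; the happy path reads top to bottom.
  def balance(raw: String): Either[String, Long] =
    for {
      id      <- parseId(raw)
      account <- fetch(id)
    } yield account.balanceCents
}
```

Here `balance("acct-1")` produces `Right(12500L)`, while a malformed or unknown id produces a descriptive `Left` instead of a thrown exception.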

Essential Functions

  • Oversees the daily operation of one or more engineering teams.
  • Assists team in the development and implementation of policies, procedures and programs.
  • Mentors, coaches and assists in the career development of team members and participates in frequent one-on-ones.
  • Completes product technical design and prototyping, software development, bug verification and resolution.
  • Performs system analysis and programming activities which may require research.
  • Provides technical/engineering support for new and existing applications from code delivery until the retirement of the application.
  • Provides reasonable task and project effort estimates.
  • Ensures timely, effective, and quality delivery of software into production.
  • Develops and tests applications based on business requirements and industry best practices.
  • Creates required technical documentation.
  • Periodically troubleshoots system failures during off hours.
  • Participates in an on-call rotation supporting team owned services.
  • Collaboratively works across teams to ensure timely delivery of high-quality products.
  • Collaboratively works with customer support team to resolve or diagnose defects.
Senior Machine Learning - Series A Funded Startup
machine-learning scala python tensorflow apache-spark machine learning Dec 26 2019
About you:
  • Care deeply about democratizing access to data.  
  • Passionate about big data and are excited by seemingly-impossible challenges.
  • At least 80% of people who have worked with you put you in the top 10% of the people they have worked with.
  • You think life is too short to work with B-players.
  • You are entrepreneurial and want to work in a super fast-paced environment where the solutions aren’t already predefined.
About SafeGraph: 

  • SafeGraph is a B2B data company that sells to data scientists and machine learning engineers. 
  • SafeGraph's goal is to be the place for all information about physical Places
  • SafeGraph currently has 20+ people and has raised a $20 million Series A.  CEO previously was founder and CEO of LiveRamp (NYSE:RAMP).
  • Company is growing fast, over $10M ARR, and is currently profitable. 
  • Company is based in San Francisco but about 50% of the team is remote (all in the U.S.). We get the entire company together in the same place every month.

About the role:
  • Core software engineer.
  • Reporting to SafeGraph's CTO.
  • Work as an individual contributor.  
  • Opportunities for future leadership.

Requirements:
  • You have at least 6 years of relevant work experience.
  • Deep understanding of machine learning models, data analysis, and both supervised and unsupervised learning methods. 
  • Proficiency writing production-quality code, preferably in Scala, Java, or Python.
  • Experience working with huge data sets. 
  • You are authorized to work in the U.S.
  • Excellent communication skills.
  • You are amazingly entrepreneurial.
  • You want to help build a massive company. 
Nice to haves:
  • Experience using Apache Spark to solve production-scale problems.
  • Experience with AWS.
  • Experience with building ML models from the ground up.
  • Python, Database and Systems Design, Scala, TensorFlow, Apache Spark, Hadoop MapReduce.
Data Engineer
python pyspark sql aws scala Dec 25 2019
  • Solid programming background in Python
  • Experience extracting and loading data to relational databases and optimizing SQL queries
  • Familiarity with the Hadoop ecosystem, mainly HDFS, Hive, and Spark: we do PySpark, but Scala would also be considered
  • Experience with these AWS services: Glue, Athena, Lambda, EMR
  • Knowledge of orchestration tools such as Airflow, Oozie, AWS Step Functions
  • Nice to have: experience with Kafka and Kinesis
  • Proficiency in English and Spanish.
VP of Engineering - Series A Funded Data Startup
scala python machine-learning apache-spark hadoop machine learning Dec 24 2019
About you:
  • High-velocity superstar.
  • You want the challenge of growing and managing remote teams.
  • You love really hard engineering challenges.
  • You love recruiting and managing super sharp people.
  • At least 80% of people who have worked with you put you in the top 10% of the people they have worked with.
  • You think life is too short to work with B-players.
  • You are entrepreneurial and want to work in a super fast-paced environment where the solutions aren’t already predefined.
  • You walk through walls.
  • You want to help build a massive company.
  • You live in the United States or Canada.
About SafeGraph: 

  • SafeGraph is a B2B data company that sells to data scientists and machine learning engineers. 
  • SafeGraph's goal is to be the place for all information about physical Places
  • SafeGraph currently has 20+ people and has raised a $20 million Series A.  CEO previously was founder and CEO of LiveRamp (NYSE:RAMP).
  • Company is growing fast, over $10M ARR, and is currently profitable. 
  • Company is based in San Francisco, Denver, and New York City but about 50% of the team is remote (all currently in the U.S.). We get the entire company together in the same place every month.


About the role:


  • Core member of the executive team, reporting directly to the CEO.
  • Oversee all engineering and machine learning.

Opportunity to be:

  • be one of the first 40 people in a very fast-growing company
  • be one of the core drivers of the company's success
  • work with an amazing engineering team
  • be on the executive team
  • take on more responsibility as the company grows
  • work with only A-players
Senior Big Data Software Engineer
scala apache-spark python java hadoop big data Dec 23 2019
About you:
  • Care deeply about democratizing access to data.  
  • Passionate about big data and are excited by seemingly-impossible challenges.
  • At least 80% of people who have worked with you put you in the top 10% of the people they have worked with.
  • You think life is too short to work with B-players.
  • You are entrepreneurial and want to work in a super fast-paced environment where the solutions aren’t already predefined.
  • You live in the U.S. or Canada and are comfortable working remotely.
About SafeGraph: 

  • SafeGraph is a B2B data company that sells to data scientists and machine learning engineers. 
  • SafeGraph's goal is to be the place for all information about physical Places
  • SafeGraph currently has 20+ people and has raised a $20 million Series A.  CEO previously was founder and CEO of LiveRamp (NYSE:RAMP).
  • Company is growing fast, over $10M ARR, and is currently profitable. 
  • Company is based in San Francisco but about 50% of the team is remote (all in the U.S.). We get the entire company together in the same place every month.

About the role:
  • Core software engineer.
  • Reporting to SafeGraph's CTO.
  • Work as an individual contributor.  
  • Opportunities for future leadership.

Requirements:
  • You have at least 6 years of relevant work experience.
  • Proficiency writing production-quality code, preferably in Scala, Java, or Python.
  • Strong familiarity with map/reduce programming models.
  • Deep understanding of all things “database” - schema design, optimization, scalability, etc.
  • You are authorized to work in the U.S.
  • Excellent communication skills.
  • You are amazingly entrepreneurial.
  • You want to help build a massive company. 
Nice to haves:
  • Experience using Apache Spark to solve production-scale problems.
  • Experience with AWS.
  • Experience with building ML models from the ground up.
  • Experience working with huge data sets.
  • Python, Database and Systems Design, Scala, Data Science, Apache Spark, Hadoop MapReduce.
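As a refresher on the map/reduce programming model this posting asks for, the classic word count can be sketched with plain Scala collections; on a real cluster the same map/shuffle/reduce shape runs over partitions via Spark or Hadoop MapReduce:

```scala
// Minimal map/reduce sketch: the canonical word count, in-memory.
object WordCount {
  def count(lines: Seq[String]): Map[String, Int] =
    lines
      .flatMap(_.toLowerCase.split("\\s+"))   // map: line -> words
      .filter(_.nonEmpty)
      .groupBy(identity)                      // shuffle: group by key
      .view
      .mapValues(_.size)                      // reduce: count per key
      .toMap
}
```

The same three stages translate one-to-one to `rdd.flatMap(...).map(w => (w, 1)).reduceByKey(_ + _)` in Spark.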
Senior iOS Developer
ios swift objective-c rx-swift senior scala Dec 15 2019

Get to know us

We create open-source software that puts users in control over their online browsing experience. Our desktop and mobile products, such as Adblock Plus (ABP), Adblock Browser and Flattr, help sustain and grow a fair, open web because they give users control while providing user-friendly monetization. Our most popular product, ABP, is currently used on over 100 million devices.

Here’s the big picture

Work on ABP iOS and macOS development, focusing on lower-level tasks. You will be working on complex issues, both on mobile and on browser development. Not to brag or anything, but look at how many projects you can work on, and everything is open source:

  • ABP for Safari on iOS
  • Adblock Browser for iOS
  • ABP for Safari on macOS
  • ABPKit (framework), the backbone of our products and the foundation for our partner products

After your morning coffee, you’ll be expected to do...

  • iOS (80% focus) and some macOS development using Objective-C, Swift, RxSwift
  • Core development of libraries, backend, server-side software
  • Development of iOS and macOS apps
  • Development of new products

and the rest...

  • Consulting with partners
  • Maintaining existing products
  • Strengthening the underlying technology and backend of our mobile core products
  • Working on core content blocking functionality
  • Finding innovative solutions in a very limited content blocking environment

We trust you to work from home if you have...

  • Multiple years of iOS, Swift, and Objective-C development
  • Advanced programming experience equivalent to programming with RxSwift for significant application services
  • Knowledge of algorithms and data structures (at computer science 4-year level)
  • Debugging skills (multithreading, concurrency, memory lifetimes, parallelization)
  • Expertise with HTTP protocols, database operations (SQL/NoSQL), and functional programming (e.g. Haskell, Scala, F#, Rust, Swift, JavaScript)
  • Experience in interoperability with Swift and Objective-C
  • Ability to write accurate, concise, and complete technical documentation

You can do this job in your sleep if you also have experience in...

  • Browser development
  • Content blocking
  • Working in agile teams
  • Open source development

A little bit about the team you’ll work with

The iOS/macOS team is a globally distributed team that works on multiple projects. Depending on priorities, we decide how we want to work on each level. We have bi-weekly video meetings, but most of the communication happens over IRC, email, and our issue tracking system.

Remote Senior Data Engineer
Hays  
scala senior python docker aws testing Dec 08 2019
Hays Specialist Recruitment is working in partnership with SecurityScorecard to manage the recruitment of this position

The end client is unable to sponsor or transfer visas for this position; all parties authorized to work in the US without sponsorship are encouraged to apply.

This position is NOT eligible for subcontractors or those that require sponsorship.

Hays is conducting an exclusive search for a Senior Data Engineer for a cybersecurity company based in NYC. SecurityScorecard builds a unique product that rates the cybersecurity postures of corporate entities through scored analysis of cyber threat intelligence signals, for the purposes of third-party management and IT risk management. They have a very modern technology stack and work in a dynamic, agile environment.

The position is 100% remote. You'll be responsible for managing the analytics pipeline using Spark, Hadoop, etc., leveraging cutting-edge technologies to support new and existing services and processes, driving projects through all stages of development, and improving the effective output of the engineering team by managing quality and identifying inconsistencies. Your experience should include 5+ years with Scala or another functional language (commercial environment preferred), 3+ years with Spark and the Hadoop ecosystem (or similar frameworks), familiarity with tools like AWS and Docker, experience working with third-party software, and expert skills with SQL.

Remote Senior Data Engineer - Perm - New York, NY

Remote Senior Data Engineer Skills & Requirements

Responsibilities
* Manage the analytic pipeline using Spark, Hadoop, etc.
* Leverage cutting-edge technologies to support new and existing services and processes
* Quickly and efficiently design and implement in an agile environment
* Work with other team members to implement consistent architecture
* Drive projects through all stages of development
* Actively share knowledge and responsibility with other team members and teams
* Improve the effective output of the engineering team by managing quality and identifying inconsistencies

Requirements:
3+ years of experience with:
* Scala or Python, both preferred
* Distributed systems (e.g. Spark, Hadoop)
* Database systems (e.g. Postgres, MySQL)
Experience with the following is preferred:
* IP (v4/v6) allocation and addressing conventions
* DNS conventions and best practices
* Anti-abuse investigations
* Bachelor's degree (CS, CE/EE, Math, or Statistics preferred)

Why Hays?

You will be working with a professional recruiter who has intimate knowledge of the Information Technology industry and market trends. Your Hays recruiter will lead you through a thorough screening process in order to understand your skills, experience, needs, and drivers. You will also get support on resume writing, interview tips, and career planning, so when there's a position you really want, you're fully prepared to get it. Additionally, if the position is a consulting role, Hays offers you the opportunity to enroll in full medical, dental or vision benefits.

* Medical
* Dental
* Vision
* 401K
* Life Insurance ($20,000 benefit)

Nervous about an upcoming interview? Unsure how to write a new resume?

Visit the Hays Career Advice section to learn top tips to help you stand out from the crowd when job hunting.

Hays is an Equal Opportunity Employer.

Drug testing may be required; please contact a recruiter for more information.

Senior Data Engineer - Spark expertise
scala postgresql senior data science docker aws Dec 05 2019

Position Summary

The Senior Data Analytics Engineer will build meaningful analytics that inform companies of security risk.  You will be working closely with our Data Science team, implementing algorithms and managing the analytic pipeline. We have over 1 PB of data, so the ideal candidate will have experience processing and querying large amounts of data.  

This role requires senior level experience in Spark, SQL and Scala. Our interview process will include live coding using these technologies!

Responsibilities

  • Manage the analytic pipeline using Spark, Hadoop, etc.
  • Leverage cutting-edge technologies to support new and existing services and processes.
  • Quickly and efficiently design and implement in an agile environment
  • Work with other team members to implement consistent architecture
  • Drive projects through all stages of development
  • Actively share knowledge and responsibility with other team members and teams
  • Improve the effective output of the engineering team by managing quality and identifying inconsistencies.

Skills and Experience:

  • Bachelor's degree (CS, EE or Math preferred) or equivalent work experience, as well as interest in a fast-paced, complex environment.
  • 5+ years of experience with Scala, preferably in a commercial environment
  • Expert in Spark, experience with the Hadoop ecosystem and similar frameworks
  • Expert in SQL
  • Familiarity with various tools such as AWS and Docker and an instinct for automation
  • Strong understanding of Software Architecture principles and patterns.
  • Experience working with 3rd party software and libraries, including open source
  • Experience with Postgres

Traits:

  • Quick-thinker who takes ownership and pride in their work
  • A commitment and drive for excellence and continual improvement 
  • A strong sense of adventure, excitement and enthusiasm.
  • Excellent systems analytical, problem solving and interpersonal skills

Interview Process:

  • Initial Conversation with a SecurityScorecard Talent team to learn more about your experience and career objectives
  • Technical Interview with 1-2 data engineers. This will include live coding in SQL, Spark, and Scala.
  • Coding Exercise - take home exercise
  • Final Interview: Meet 1-2 engineering leaders
Software Engineer - .NET Platform Developer
Percona  
dot net java python scala php big data Dec 02 2019
If you like working with the developer community for an Engagement Database and being on the front lines of integrating our product into various technology stacks, this is for you. This is your chance to disrupt a multi-billion-dollar industry, change how the world accesses information, and reinvent the way businesses deliver amazing customer experiences. As a Software Engineer on the SDK and Connector engineering team, you’ll work on the developer interface to Couchbase Server for JVM platform languages, including the Java SDK and future platforms like Scala and Kotlin, and contribute to connectors and frameworks such as Apache Spark and Spring Data. In your daily work, you will help the developer community innovate on top of our Engagement Database. You will have one of those rare positions of working with a market-leading product and an Open Source community of users and contributors. The skill set and expectations are…

Responsibilities

  • Take on key projects related to the development, enhancement and maintenance of Couchbase’s products built on the JVM platform core-io, including the Java SDK and new platforms we add. Create, enhance and maintain other JVM-related projects such as the Kotlin client, the Spring Data Connector and others.
  • Contribute to the creation, enhancement and maintenance of documentation and samples that demonstrate how Java based languages and platforms work with Couchbase.
  • Create, enhance and maintain various documentation artifacts designed to make it easy for developers and system architects to quickly become productive with Couchbase.
  • Maintain, nurture and enhance community contributions to the Couchbase community and forums from the overall Couchbase community.
  • Work with the growing community of developers who will want to know how to develop Java, Kotlin, Spring, .NET, Node.js, PHP, Python and higher level frameworks with applications built on Couchbase.

Qualifications

  • The right person for this role will be a self-motivated, independent, and highly productive individual, with ability to learn new technologies and become quickly proficient.
  • Must have a minimum of 5 years of software development experience in a professional software development organization.  Ideally, this would be working on platform level software.
  • Should be familiar with modern, reactive, asynchronous software development paradigms such as Reactor and Reactive Streams.
  • Should have experience with binary streaming wire protocols, such as those in Couchbase.  Experience with streaming protocols based on Apache Avro and data formats such as those in Apache Kafka would be good.
  • Should have familiarity with web application development beyond Spring Framework, such as in Play Framework or others.  The ideal candidate would have familiarity with web application or mobile integration development in at least one other platform such as .NET or Java.
  • Must be familiar with consuming and producing RESTful interfaces.  May be familiar with GraphQL interfaces as well.
  • Would ideally be able to demonstrate experience in large scale, distributed systems and understand the techniques involved in making these systems scale and perform.
  • Has the ability to work in a fast paced environment and to be an outstanding team player.
  • Familiarity with distributed networked server systems that run cross-platform on Linux and Windows is highly desired.
  • Experience with git SCM, and tools such as Atlassian, JIRA and Jenkins CI are also strongly desired.
About Couchbase

Couchbase's mission is to be the platform that accelerates application innovation. To make this possible, Couchbase created an enterprise-class, multi-cloud NoSQL database architected on top of an open source foundation. Couchbase is the only database that combines the best of NoSQL with the power and familiarity of SQL, all in a single, elegant platform spanning from any cloud to the edge.  
 
Couchbase has become pervasive in our everyday lives; our customers include industry leaders Amadeus, AT&T, BD (Becton, Dickinson and Company), Carrefour, Comcast, Disney, DreamWorks Animation, eBay, Marriott, Neiman Marcus, Tesco, Tommy Hilfiger, United, Verizon, Wells Fargo, as well as hundreds of other household names.

Couchbase’s HQ is conveniently located in Santa Clara, CA with additional offices throughout the globe. We’re committed to a work environment where you can be happy and thrive, in and out of the office.

At Couchbase, you’ll get:
* A fantastic culture
* A focused, energetic team with aligned goals
* True collaboration with everyone playing their positions
* Great market opportunity and growth potential
* Time off when you need it.
* Regular team lunches and fully-stocked kitchens.
* Open, collaborative spaces.
* Competitive benefits and pre-tax commuter perks

Whether you’re a new grad or a proven expert, you’ll have the opportunity to learn new skills, grow your career, and work with the smartest, most passionate people in the industry.

Revolutionizing an industry requires a top-notch team. Become a part of ours today. Bring your big ideas and we'll take on the next great challenge together.

Check out some recent industry recognition:

Want to learn more? Check out our blog: https://blog.couchbase.com/

Couchbase is proud to be an equal opportunity workplace. Individuals seeking employment at Couchbase are considered without regards to age, ancestry, color, gender (including pregnancy, childbirth, or related medical conditions), gender identity or expression, genetic information, marital status, medical condition, mental or physical disability, national origin, protected family care or medical leave status, race, religion (including beliefs and practices or the absence thereof), sexual orientation, military or veteran status, or any other characteristic protected by federal, state, or local laws.
Messaging Systems Architect
Ockam  
scala design Nov 19 2019

We are seeking an Elixir/Erlang Systems Architect with expertise designing and building high-throughput, concurrent, real-time messaging and streaming systems. You should have deep experience with Erlang, Elixir, Scala or similar actor-model-based languages/tools for building fault-tolerant distributed systems. Experience with the core internal design of systems like Kafka, RabbitMQ, Spark Streaming, Phoenix Channels, Akka or Riak is also required.

Responsibilities

    • Collaborate with the team with well communicated and documented processes
    • Develop high-quality software design and architecture
    • Identify, prioritize and execute tasks in the software development lifecycle
    • Develop tools and applications by producing clean, efficient code
    • Automate tasks through appropriate tools and scripting
    • Review and debug code
    • Perform validation and verification testing
    • Document development phases and monitor systems
    • Ensure software is up-to-date with the latest technologies

Requirements

    • Extensive engineering experience across multiple systems with 10+ years of experience.
    • Comfort in switching between multiple programming languages.

Remote candidates are encouraged to apply. Ockam is a distributed, remote-first structured team with a headquarters in San Francisco California.

Code Challenge Reviewer
Geektastic  
java python javascript ruby css scala Nov 15 2019

Fancy earning extra cash reviewing code challenge submissions from any location?

We pay you £25 for each code challenge you review (30 minutes review time).  You can do as many or as few as you want per week.

We are looking for highly talented Java, JavaScript, PHP, Python, C#, Ruby, Scala, iOS and Android developers

Please read some comments made by our reviewers on Quora here 

We pay you via Transferwise, Revolut or Payoneer at the end of the month (unless you are in the UK, in which case we pay via bank transfer). 

To become part of the team you just need to register with us at Geektastic and take some code challenges. These are reviewed by our expert team (we need to know how great you are :))

Once you are part of the distributed team you will then be notified on our Slack channel when a new challenge is ready to be reviewed.

Feel free to email hello@geektastic.com if you have any questions

Senior Product Engineer: Back end
x.ai  
scala python senior javascript aws api Nov 13 2019

We are building some really exciting sh*t at x.ai

At x.ai, we're building artificial intelligence super-powered productivity software. The software schedules meetings for our customers automatically, without subjecting them to the typical back and forth over email negotiating when and where to meet someone. We're looking for a self-motivated and enthusiastic individual to join us on the journey in building this new frontier. You’ll get to work side by side with a group of focused and passionate individuals in a fully distributed setting.


Responsibilities

  • Work with product team to identify and define features that solve customer pain in a manner that’s easy to understand and explain
  • Leverage your Scala expertise to drive product design implementation and improvements
  • Iterate on ideas quickly from proof of concept to the final version
  • Become deeply familiar with the challenges we’re solving for customers and the technical approaches we’ve taken
  • Test the software you build, define edge cases and monitor system health in production
  • Identify and build metrics or tools to help us understand customer behavior
  • Define and champion best practices of software development
  • Take ownership of our technical stack: help improve documentation or find ways to make it easier to work on our system
  • Lead and collaborate in technical decision-making
  • Be able to manage your own time while making sure to communicate the status of your projects

Qualifications

  • 5+ years of relevant experience
  • Expert in Scala
  • Expert in API integrations
  • Experience in Typescript a plus
  • Experience in Javascript a plus
  • Experience in Python a plus
  • MongoDB, AWS, Mesos experience a plus
  • Customer obsessed
  • Thrives in a fully remote setting
Senior Scala Engineer
scala senior docker aws Nov 11 2019

This is an opportunity to work as part of a distributed technology team along with our product team to help define and deliver solutions for our clients.

Here are some of the qualities we’re looking for in a successful team member:

  • You strive to make everything around you better.
  • You are equally excited by experimenting with new technologies as you are about delivering value through maintainable, scalable, and reliable services.
  • You view software engineering less as writing code and more as delivering high-value, innovative solutions to real-world problems.
  • Some knowledge of corporate bonds is desired, but not mandatory for delivering the majority of our features.
  • You are skilled in concurrency and distributed message-based systems, and have a deep affinity for building reliable, high-throughput, low-latency solutions.
  • You can clearly communicate your ideas and give and accept direct feedback.
  • You are passionate about honing your craft inside and outside of work.
  • You can convey why you are attracted to working in a functional paradigm.

Our stack:

  • Scala with Akka Streams for efficient stream processing
  • Kafka for scalable messaging
  • Linux, Docker, Ansible, and AWS for dynamic environments
  • Google Apps, Slack, and Zoom for open communication
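To sketch the stream-processing shape this stack implies: a source is pulled one element at a time through filter and bounded-demand stages into an aggregation. This is illustrative only (not Akka Streams itself, which adds backpressure and async stage boundaries), and the quote type and fields are hypothetical:

```scala
// Toy pipeline: the filter/take/fold shape of a streaming stage graph.
object QuotePipeline {
  final case class Quote(symbol: String, priceCents: Long)

  // Consume a possibly unbounded source lazily: drop bad ticks, honor a
  // bounded demand (like a downstream pull), and fold into running counts.
  def runningCounts(source: Iterator[Quote], limit: Int): Map[String, Int] =
    source
      .filter(_.priceCents > 0)   // validate stage
      .take(limit)                // bounded-demand stage
      .foldLeft(Map.empty[String, Int]) { (acc, q) =>
        acc.updated(q.symbol, acc.getOrElse(q.symbol, 0) + 1)
      }                           // sink: aggregate per symbol
}
```

In Akka Streams the same graph would be `Source(...).filter(...).take(...).runFold(...)`, with backpressure handled by the runtime rather than by `Iterator` laziness.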
Senior Product Engineer: Front end
x.ai  
javascript node-js scala python senior aws Nov 08 2019

We are building some really exciting sh*t at x.ai

At x.ai, we're building artificial intelligence super-powered productivity software. The software schedules meetings for our customers automatically, without subjecting them to the typical back and forth over email negotiating when and where to meet someone. We're looking for a self-motivated and enthusiastic individual to join us on the journey in building this new frontier. You’ll get to work side by side with a group of focused and passionate individuals in a fully distributed setting.

Responsibilities

  • Work with product team to identify and define features that solve customer pain in a manner that’s easy to understand and explain
  • Iterate on ideas quickly from proof of concept to the final version
  • Become deeply familiar with the challenges we’re solving for customers and the technical approaches we’ve taken
  • Leverage your Javascript expertise to drive product design implementation and improvements
  • Test the software you build, define edge cases and monitor system health in production
  • Identify and build metrics or tools to help us understand customer behavior
  • Define and champion best practices of software development
  • Take ownership of our technical stack: help improve documentation or find ways to make it easier to work on our system
  • Lead and collaborate in technical decision-making
  • Be able to manage your own time while making sure to communicate the status of your projects

Qualifications

  • 5+ years of relevant experience
  • Expert in Node.js
  • Expert in building web apps
  • Expert in API integrations
  • Experience in Typescript a plus
  • Experience in Scala a plus
  • Experience in Python a plus
  • MongoDB, AWS, Mesos experience a plus
  • Customer obsessed
  • Thrives in a fully remote setting
Data Engineer-Remote
python scala big data aws design healthcare Nov 08 2019

Description

SemanticBits is looking for a talented Data Engineer who is eager to apply computer science, software engineering, databases, and distributed/parallel processing frameworks to prepare big data for the use of data analysts and data scientists. You will deliver data acquisition, transformations, cleansing, conversion, compression, and loading of data into data and analytics models. You will work in partnership with data scientists and analysts to understand use cases, data needs, and outcome objectives. You are a practitioner of advanced data modeling and optimization of data and analytics solutions at scale; an expert in data management, data access (big data, data marts, etc.), programming, and data modeling; and familiar with analytic algorithms and applications (such as machine learning).

SemanticBits is a leading company specializing in the design and development of digital health services, and the work we do is just as unique as the culture we’ve created. We develop cutting-edge solutions to complex problems for commercial, academic, and government organizations. The systems we develop are used in finding cures for deadly diseases, improving the quality of healthcare delivered to millions of people, and revolutionizing the healthcare industry on a nationwide scale. There is a meaningful connection between our work and the real people who benefit from it; and, as such, we create an environment in which new ideas and innovative strategies are encouraged. We are an established company with the mindset of a startup and we feel confident that we offer an employment experience unlike any other and that we set our employees up for professional success every day.

Requirements

  • Bachelor’s degree in computer science (or related) and two to four years of professional experience
  • Strong knowledge of computer science fundamentals: object-oriented design and programming, data structures, algorithms, databases (SQL and relational design), networking
  • Demonstrable experience engineering scalable data processing pipelines.
  • Demonstrable expertise with Python, Scala, Spark, and wrangling of various data formats - Parquet, CSV, XML, JSON.
  • Experience with the following technologies is highly desirable: Redshift (w/Spectrum), Hadoop, Apache NiFi, Airflow, Apache Kafka, Apache Superset, Flask, Node.js, Express, AWS EMR, Tableau, Looker, Dremio
  • Experience with Agile methodology, using test-driven development.
  • Excellent command of written and spoken English
  • Self-driven problem solver
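As a small illustration of the wrangling these requirements describe — cleansing and converting raw records — here is a stdlib-only Scala sketch. Spark and the Parquet/XML tooling require external dependencies, so a hypothetical CSV cleanse (invented `Claim` record, made-up rows) stands in for the real pipeline:

```scala
// Hypothetical record type, purely for illustration.
case class Claim(id: String, amount: Double)

// Parse CSV lines: skip the header, trim fields, drop malformed rows.
def cleanse(lines: Seq[String]): Seq[Claim] =
  lines.drop(1)
    .map(_.split(",").map(_.trim))
    .collect {
      case Array(id, amt) if amt.toDoubleOption.isDefined =>
        Claim(id, amt.toDouble)
    }

val raw   = Seq("id,amount", "a1, 12.5", "a2, oops", "a3, 7.5")
val clean = cleanse(raw)
// clean == Seq(Claim("a1", 12.5), Claim("a3", 7.5)) — the bad row is dropped
```

In Spark the same cleanse would be expressed over a `DataFrame`, but the transform-and-drop-malformed shape is identical.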

Benefits

  • Generous base salary
  • Three weeks of PTO
  • Excellent health benefits program (Medical, dental and vision)
  • Education and conference reimbursement
  • 401k retirement plan. We contribute 3% of base salary irrespective of the employee's contribution
  • 100% paid short-term and long-term disability
  • 100% paid life insurance
  • Flexible Spending Account (FSA)
  • Casual working environment
  • Flexible working hours

SemanticBits, LLC is an equal opportunity, affirmative action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability, or any other characteristic protected by law. We are also a veteran-friendly employer.

Solutions Architect
phData  
scala java big data cloud aws testing Nov 05 2019

If you're inspired by innovation, hard work, and a passion for data, this may be the ideal opportunity to leverage your background in Big Data, Software Engineering, Data Engineering, or Data Analytics to design, develop, and innovate big data solutions for a diverse set of global and enterprise clients.

At phData, our proven success has skyrocketed the demand for our services, resulting in quality growth at our company headquarters, conveniently located in downtown Minneapolis, and expanding throughout the US. Notably, we've also been voted Best Company to Work For in Minneapolis for three consecutive years.

As the world’s largest pure-play Big Data services firm, our team includes Apache committers, Spark experts and the most knowledgeable Scala development team in the industry. phData has earned the trust of customers by demonstrating our mastery of Hadoop services and our commitment to excellence.

In addition to a phenomenal growth and learning opportunity, we offer competitive compensation and excellent perks including base salary, annual bonus, extensive training, paid Cloudera certifications - in addition to generous PTO and a long term incentive plan for employees. 

As a Solution Architect on our Big Data Consulting Team, your responsibilities will include:

  • Design, develop, and innovate Hadoop solutions; partner with our internal Infrastructure Architects and Data Engineers to build creative solutions to tough big data problems.
  • Determine the technical project road map, select the best tools, assign tasks and priorities, and assume general project management oversight for performance, data integration, ecosystem integration, and security of big data solutions. Mentor and coach Developers and Data Engineers. Provide guidance with project creation, application structure, automation, code style, testing, and code reviews.
  • Work across a broad range of technologies – from infrastructure to applications – to ensure the ideal Hadoop solution is implemented and optimized.
  • Integrate data from a variety of data sources (data warehouse, data marts) utilizing on-prem or cloud-based data structures (AWS); determine new and existing data sources.
  • Design and implement streaming, data lake, and analytics big data solutions.
  • Create and direct testing strategies including unit, integration, and full end-to-end tests of data pipelines.
  • Select the right storage solution for a project, comparing Kudu, HBase, HDFS, and relational databases based on their strengths.
  • Utilize ETL processes to build data repositories; integrate data into the Hadoop data lake using Sqoop (batch ingest), Kafka (streaming), and Spark, Hive, or Impala (transformation).
  • Partner with our Managed Services team to design and install on-prem or cloud-based infrastructure including networking, virtual machines, containers, and software.
  • Determine and select the best tools to ensure optimized data performance; perform data analysis utilizing Spark, Hive, and Impala.
  • Local candidates work between the client site and our Minneapolis office. Remote US candidates must be willing to travel 20% for training and project kick-offs.

Technical Leadership Qualifications

  • 5+ years of previous experience as a Software Engineer, Data Engineer, or Data Analyst
  • Expertise in core Hadoop technologies including HDFS, Hive, and YARN
  • Deep experience in one or more ecosystem products/languages such as HBase, Spark, Impala, Solr, Kudu, etc.
  • Expert programming experience in Java, Scala, or another statically typed programming language
  • Ability to learn new technologies in a quickly changing field
  • Strong working knowledge of SQL and the ability to write, debug, and optimize distributed SQL queries
  • Excellent communication skills, including proven experience working with key stakeholders and customers

Leadership

  • Ability to translate “big picture” business requirements and use cases into a Hadoop solution, including ingestion of many data sources, ETL processing, data access and consumption, as well as custom analytics
  • Experience scoping activities on large-scale, complex technology infrastructure projects
  • Customer relationship management, including project escalations and participation in executive steering meetings
  • Coaching and mentoring data or software engineers
Senior Type-System Engineer
Luna  
java scala senior ux design Nov 03 2019

Senior Type-System Engineer
Luna is looking for a senior type-system engineer to help build the next generation interpreter and runtime for Luna, a project said by Singularity University to have the potential to change the lives of one-billion people. If you have strong technical skills and a passion for all things compiler, then this role could be the one for you.

As a type-system engineer you'll work as part of the compiler team to design and implement Luna's new type system, including its underlying theory, type-checker, and inference engine. This work is _intrinsic_ to Luna's evolution, and will provide you with the opportunity to collaborate with a world-class team of engineers, community managers, and business developers (with experience at Bloomberg, GitHub, and PayPal, to name a few), making your mark on Luna's future.

What You'll Do
As a senior type-system engineer, you'll be working on the design and development of Luna's new type-system, in conjunction with the rest of the compiler team, to help support the language's evolution. This will involve:

  • Determining and formalising the theoretical underpinnings of the new type system in such a way as to ensure its soundness.
  • Both theoretical and practical treatments of the theory behind Luna's type system.
  • Working with the broader compiler team to implement the type-checking and type-inference engines as part of the greater interpreter.
  • Using the type-system's information to improve the interpreter's functionality and performance, as well as how it interacts with the users.

The Skills We're Looking For
We have a few particular skills that we're looking for in this role:

  • Practical and rich experience writing code in a functional programming language such as Haskell or Scala, including experience with type-level programming techniques (3+ years).
  • Experience working with the theory behind powerful type systems, including row types, type-checking and type-inference algorithms, and dependently-typed systems.
  • Practical experience building real-world type-systems, including facilities for both type-checking and inference.
  • An awareness of the UX impacts of type-systems, and a willingness to minimise their often-intrusive nature.
  • Practical experience in building large and complex software systems.

It would be a big bonus if you had:

  • Experience writing Java and Scala code, as these will be used to implement the type-system.
  • Experience in writing comprehensive regression tests for both type-inference and type-checking systems.
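By way of illustration, the kind of type-checking machinery described above can be sketched for a toy simply-typed lambda calculus in a few lines of Scala. This is a hypothetical teaching example, not Luna's actual type system, which involves far richer features (row types, dependent types, full inference):

```scala
// Types: integers and function types.
sealed trait Type
case object IntT extends Type
case class FunT(from: Type, to: Type) extends Type

// Expressions: literals, variables, typed lambdas, application.
sealed trait Expr
case class Lit(n: Int) extends Expr
case class Var(name: String) extends Expr
case class Lam(param: String, paramT: Type, body: Expr) extends Expr
case class App(fn: Expr, arg: Expr) extends Expr

// The type-checker: walk the tree, threading an environment of bindings.
def typeOf(e: Expr, env: Map[String, Type]): Either[String, Type] = e match {
  case Lit(_)       => Right(IntT)
  case Var(x)       => env.get(x).toRight(s"unbound variable $x")
  case Lam(x, t, b) => typeOf(b, env + (x -> t)).map(FunT(t, _))
  case App(f, a) =>
    for {
      ft <- typeOf(f, env)
      at <- typeOf(a, env)
      rt <- ft match {
        case FunT(from, to) if from == at => Right(to)
        case FunT(from, _) => Left(s"expected argument of type $from, got $at")
        case _             => Left("applying a non-function")
      }
    } yield rt
}

// (λx:Int. x) 1 checks to Int; an unbound variable is rejected.
val ok  = typeOf(App(Lam("x", IntT, Var("x")), Lit(1)), Map.empty)
val bad = typeOf(Var("y"), Map.empty)
```

A real inference engine (Hindley–Milner and beyond) adds unification and constraint solving on top of this basic checking shape.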

Avoid the confidence gap. You don't have to match all of the skills above to apply!

Who You'll Work With
You'll be joining a distributed, multi-disciplinary team that includes people with skills spanning from compiler development to data-science. Though you'll have your area to work on, our internal culture is one of collaboration and communication, and input is always welcomed.

We firmly believe that only by working together, rather than putting our team members in their own boxes, can we create the best version of Luna there can be.

The Details
As part of the Luna team you'd be able to work from anywhere, whether that be at home, or on the go! We have team members distributed across the world, from San Francisco, to London, to Kraków. We welcome remote work and flexible schedules, or you can work from the Kraków office (or our planned SF office) if you'd like. We can provide competitive compensation and holiday, as well as the possibility of equity as time goes on.

How To Apply?
Send us an email at jobs@luna-lang.org, and tell us a little bit about yourself and why you think you'd be a good fit for the role! You can also tell us about:

  • Some of your past work or projects.
  • Why you'd like to work on Luna, and where you imagine Luna being in 5 years.
  • The most important features of a team that you'd like to work in.
  • Whether you take pride in your ability to communicate clearly and efficiently with your team.
Senior Software Engineer
python mysql scala senior postgresql frontend Oct 31 2019

Invitae is a healthcare technology company that leverages genetic information to empower doctors and patients to make informed medical decisions. Our software engineers work on a variety of projects ranging from innovations in healthcare systems to taming the chaos of biology. We're constantly improving our tools and technologies to deliver the highest-quality actionable information for patient health. If you want to apply your knowledge and skills to improve the lives of millions of people, join our team.

About our team:

Invitae needs experienced engineers with diverse backgrounds to help us achieve our mission - provide genetic information to billions of people.  We are a cross-functional team of scientific domain experts and dedicated, curious engineers. We build systems that take massive amounts of genomic data, combine it with the world's scientific literature, add to it years of rigorously curated results, and package it all neatly for our scientists to consume. It's a lot of information. As the data gets bigger, our systems need to get better and faster. That's where you come in.

What you will do:

  • Help define and build new features or applications based on technology and business needs.
  • Minimum of 5 years' experience and a Bachelor's degree in Engineering
  • Write structured, tested, readable and maintainable code.
  • Participate in code reviews to ensure code quality and distribute knowledge.
  • Lead technical efforts for internal or external customer needs.
  • Support your teammates by continuing to learn and grow.

What you bring:

  • Industry experience with full stack architecture and distributed systems.
  • Multiple years of industry experience with backend or frontend frameworks such as:
    • Python/Django
    • JavaScript/React
    • Scala/Play
    • Other common industry standards
  • Hands-on experience with databases (MySQL, PostgreSQL, NoSQL, etc.).  Tuning and query optimization a plus.
  • Top-notch communication skills.  Experience with distributed teams is a plus.
  • A mission-oriented mindset and a desire to never stop learning.

At Invitae, we value diversity and provide equal employment opportunities (EEO) to all employees and applicants without regard to race, color, religion, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. We will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of the San Francisco Fair Chance Ordinance.
