Remote Scala Jobs

Yesterday

Senior Backend Engineer, Enterprise and Migrations
Atlassian
backend senior java python javascript scala Oct 22
Atlassian is continuing to hire with all interviewing and on-boarding done virtually due to COVID-19. All new and existing Atlassians will continue to work from home until it’s safe to return to our offices. When our offices re-open, every Atlassian will have the choice to work from an office or from home.

Atlassian is looking for a senior backend software engineer to join our Enterprise and Migrations team. You’ll be joining a team focused on building features for our enterprise-scale customers to enable better governance, trust, and security. Our team has a direct impact on the growth of Atlassian and is the proud owner of the Atlassian Access product. We are enabling cross-product experiences, and are committed to removing all blockers for adoption of cloud for enterprise customers.

More about you
As a senior backend software engineer on this team, you will work with a talented team of Product Managers, Designers, and Architects to build application-layer services encompassing backend development, monitoring, scaling and optimizing to make the administration of Atlassian products simple at Enterprise scale.

You will be empowered to drive innovation by coming up with new and exciting ideas to creatively solve issues, as well as actively look for opportunities to improve the design, interface, and architecture of Atlassian's products on the cloud.

On your first day, we'll expect you to have:

  • Bachelor's degree in Engineering, Computer Science, or equivalent
  • 5+ years of experience crafting and implementing highly scalable and performant RESTful micro-services
  • Proficiency in any modern object-oriented programming language (e.g., Java, Scala, Python, JavaScript, etc.)
  • Fluency in any one database technology (e.g. RDBMS like Oracle or Postgres and/or NoSQL like DynamoDB or Cassandra)
  • Real passion for collaboration and strong interpersonal and communication skills
  • Broad knowledge and understanding of SaaS, PaaS, IaaS industry with hands-on experience of public cloud offerings (AWS, GAE, Azure)
  • Familiarity with cloud architecture patterns and an engineering discipline to produce software with quality

It’s great, but not required, if you have:

  • Experience using AWS, Kubernetes and Docker containers
  • Familiarity with GraphQL, web application development, and JavaScript frameworks (React, jQuery, Angular)

More about our benefits

Whether you work in an office or a distributed team, Atlassian is highly collaborative and yes, fun! To support you at work (and play) we offer some fantastic perks: ample time off to relax and recharge, flexible working options, five paid volunteer days a year for your favourite cause, an annual allowance to support your learning & growth, unique ShipIt days, a company paid trip after five years and lots more.

More about Atlassian

Creating software that empowers everyone from small startups to the who’s who of tech is why we’re here. We build tools like Jira, Confluence, Bitbucket, and Trello to help teams across the world become more nimble, creative, and aligned—collaboration is the heart of every product we dream of at Atlassian. From Amsterdam and Austin, to Sydney and San Francisco, we’re looking for people who want to write the future and who believe that we can accomplish so much more together than apart. At Atlassian, we’re committed to an environment where everyone has the autonomy and freedom to thrive, as well as the support of like-minded colleagues who are motivated by a common goal to: Unleash the potential of every team.

Additional Information

We believe that the unique contributions of all Atlassians are the driver of our success. To make sure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate on the basis of race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status.

All your information will be kept confidential according to EEO guidelines.

Learn more about Atlassian’s culture, interviewing flow, and hiring process by checking out our Candidate Resource Hub.

Last Week

Java Backend Engineer
Numbrs  
java spring aws kubernetes docker backend Oct 19

Numbrs Personal Finance AG is a technology company. Our product is Numbrs, a multibanking application available for iOS and Android. Numbrs is one of the most widely used banking apps in Germany and was recently launched in the UK.

Numbrs is reshaping the future of the workplace. We are a fully remote company, at which every employee is free to live and work wherever they want. We are a team of professional, hard-working, supportive and entrepreneurial individuals with a passion for execution.

We are seeking professionals who can thrive in a fast-paced environment where attention to detail, excellent communication skills, and a talent for delivering out-of-the-box ideas are essential. Do you want to have a real impact on the future of the financial industry? Join Numbrs.

Job Description

You will be a part of a team that is responsible for developing, releasing, monitoring and troubleshooting large scale micro-service based distributed systems with high transaction volume. You enjoy learning new things and are passionate about developing new features, maintaining existing code, fixing bugs, and contributing to overall system design. You are a great teammate who thrives in a dynamic environment with rapidly changing priorities.

Key Qualifications

  • a Bachelor's or higher degree in a technical field of study, or equivalent practical experience
  • strong hands-on experience with Java (minimum 8 years)
  • experience with high volume production-grade distributed systems
  • experience with micro-service based architecture
  • experience with software engineering best practices, coding standards, code reviews, testing and operations
  • hands-on experience with Spring Boot
  • professional experience in writing readable, testable and self-sustaining code
  • knowledge of AWS, Kubernetes, and Docker
  • excellent troubleshooting and creative problem-solving abilities
  • excellent written and oral communication in English and interpersonal skills

Ideally, candidates will also have

  • experience with Big Data technologies such as Kafka, Spark, and Cassandra
  • experience with CI/CD toolchain products like Jira, Stash, Git, and Jenkins
  • fluent with functional, imperative, and object-oriented languages
  • experience with Scala, C++, or Golang
  • knowledge of Machine Learning

Location: Home office from your domicile


This Month

Senior Scala Developer
Signal Vine
scala postgresql elasticsearch apache-kafka senior saas Oct 10

We are looking for someone who is adept at writing and delivering quality software. You will work closely with our full team of eight full-time engineers and the VP of Software Development. Our back-end stack is functional: Scala & Haskell (we have begun migrating our Haskell back-end to Scala), used to build services that run in AWS. Typically our APIs are built with Finagle and Circe. We haven't landed on a preferred data access library; we've used Anorm, Scalike, and Quill. We try not to be dogmatic about our tooling, balancing that with the benefits of consistency. The customer UI is a single-page web app written in TypeScript/Angular. On the infrastructure side we currently use PostgreSQL, Elasticsearch, Kafka, and DataDog.
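
For illustration only - a hedged sketch, not Signal Vine's actual code - a minimal Finagle HTTP service returning a Circe-encoded JSON body might look like the following. The Health case class, its fields, and the port are hypothetical:

    import com.twitter.finagle.{Http, Service}
    import com.twitter.finagle.http.{Request, Response, Status}
    import com.twitter.util.{Await, Future}
    import io.circe.generic.auto._ // derives a JSON Encoder for the case class
    import io.circe.syntax._       // provides .asJson

    // Hypothetical payload, just to show Circe encoding end to end.
    case class Health(status: String, uptimeSeconds: Long)

    object HealthApi extends App {
      // A Finagle Service maps an HTTP Request to a Future[Response].
      val service = new Service[Request, Response] {
        def apply(req: Request): Future[Response] = {
          val rep = Response(Status.Ok)
          rep.setContentTypeJson()
          rep.contentString = Health("ok", 42L).asJson.noSpaces
          Future.value(rep)
        }
      }

      // Bind to a port and block until the server shuts down.
      val server = Http.serve(":8080", service)
      Await.ready(server)
    }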

We are looking for someone that can help us architect and implement the data flow in our microservice SaaS platform. In this role you will be establishing the data engineering practice, and will be expected to define and promote best practices for big data development at Signal Vine.  Further, you should have experience with a wide array of durable storage technologies and approaches - we are looking for someone that can understand our business goals and work with the team to design the appropriate way to collect & disseminate data across our platform to achieve them.

You…

  • Are a confident and capable Scala programmer
  • Can do self directed work and work as part of a team
  • Are intellectually honest 
  • Can express technical concepts to a non-technical audience 
  • Are trustworthy and conscientious 
  • Have excellent troubleshooting and problem solving skills
  • Have an analytical mindset - you form hypotheses and run tests to get answers
  • Understand the trade-off between purity of code and the practicality of business; that is, you are willing to make reasonable compromises to satisfy business objectives.
  • Are not a brilliant jerk

It’d be cool if you...

  • Have experience designing performant ElasticSearch indices
  • Have experience with Kafka, stream processing, and/or Haskell
  • Have experience building DataDog dashboards for application monitoring 
  • Have worked as a team lead before (officially or not)
  • Enjoy mentoring
  • Have worked on a scrum team
  • Know Unix well 
  • Have public examples of projects you’ve completed 
  • Have published technically relevant articles, blog posts or books 

We will...

  • Pay a competitive salary including equity and health insurance 
  • Provide a laptop of your choice - we have a mix of Lenovos & MacBooks on our dev team
  • Respect your work schedule and habits by focusing on results 
  • Offer you a chance to go on an exciting ride as the company grows 

Attributes of Top Signal Vine Performers

  • Respectful, valuing the contributions of others
  • Humble enough to know it’s not all about you
  • Anticipating what’s next and preparing for it
  • Insightful, gaining a complete understanding
  • Intuitive, not always needing instructions
  • Detail-oriented, knowing the smallest details can be the most important
  • Compassionate, not only caring for others but trying to understand them

Your first few months

To make the onboarding process smooth, and to give you a flavor of what to expect, we have a set of goals & milestones to help get you up to speed at Signal Vine.

By (calendar) day 30

  • You will know our application architecture
  • You will know our build and deploy process
  • You will know our development workflow
  • You will have completed tickets in a sprint
  • You will learn and use our proprietary scripting language to onboard customers with our customer success team

By day 60

  • Your sprint velocity will increase
  • You will be contributing to discussions about how to build and architect features during our backlog review & pointing sessions

By day 90

  • You will be helping junior developers solve issues
  • You will know all of the initiatives the product team and contractors are working on, and how they align with the company vision
  • You will be discussing and planning technical strategy with the VP of Software Engineering

Staff Data Engineer
Medium  
java python scala aws frontend api Sep 29
Medium’s mission is to help people deepen their understanding of the world and discover ideas that matter. We are building a place where ideas are judged on the value they provide to readers, not the fleeting attention they can attract for advertisers. We are creating the best place for reading and writing on the internet—a place where today’s smartest writers, thinkers, experts, and storytellers can share big, interesting ideas.

We are looking for a Staff Data Engineer who will help build, maintain, and scale our business-critical Data Platform. In this role, you will help define a long-term vision for the Data Platform architecture and implement new technologies to help us scale our platform over time. You'll also lead development of both transactional and data warehouse designs, mentoring our team of cross-functional engineers and Data Scientists.

At Medium, we are proud of our product, our team, and our culture. Medium’s website and mobile apps are accessed by millions of users every day. Our mission is to move thinking forward by providing a place where individuals, along with publishers, can share stories and their perspectives. Behind this beautifully-crafted platform is our engineering team who works seamlessly together. From frontend to API, from data collection to product science, Medium engineers work multi-functionally with open communication and feedback.

What Will You Do

  • Work on high impact projects that improve data availability and quality, and provide reliable access to data for the rest of the business.
  • Drive the evolution of Medium's data platform to support near real-time data processing and new event sources, and to scale with our fast-growing business.
  • Help define the team strategy and technical direction, advocate for best practices, investigate new technologies, and mentor other engineers.
  • Design, architect, and support new and existing ETL pipelines, and recommend improvements and modifications.
  • Be responsible for ingesting data into our data warehouse and providing frameworks and services for operating on that data including the use of Spark.
  • Analyze, debug and maintain critical data pipelines.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Spark, and AWS technologies (a brief sketch follows this list).
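
As a rough, hedged sketch of that kind of pipeline - assuming Spark with Scala; the S3 paths and column names are invented, not Medium's - an ETL job might look like:

    import org.apache.spark.sql.{SparkSession, functions => F}

    object EventEtl {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("event-etl").getOrCreate()

        // Extract: read raw JSON events from a (hypothetical) S3 location.
        val raw = spark.read.json("s3://example-bucket/raw/events/")

        // Transform: drop malformed rows and derive a date partition column.
        val cleaned = raw
          .filter(F.col("userId").isNotNull)
          .withColumn("event_date", F.to_date(F.col("timestamp")))

        // Load: write partitioned Parquet into the warehouse staging area.
        cleaned.write
          .mode("overwrite")
          .partitionBy("event_date")
          .parquet("s3://example-bucket/warehouse/events/")

        spark.stop()
      }
    }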

About You

  • You have 7+ years of software engineering experience.
  • You have 3+ years of experience writing and optimizing complex SQL and ETL processes, preferably in connection with Hadoop or Spark.
  • You have outstanding coding and design skills, particularly in Java/Scala and Python.
  • You have helped define the architecture, tooling, and strategy for a large-scale data processing system.
  • You have hands-on experience with AWS and services like EC2, SQS, SNS, RDS, Cache, etc., or equivalent technologies.
  • You have a BS in Computer Science / Software Engineering or equivalent experience.
  • You have knowledge of Apache Spark, Spark streaming, Kafka, Scala, Python, and similar technology stacks.
  • You have a strong understanding & usage of algorithms and data structures.

Nice To Have

  • Snowflake knowledge and experience
  • Looker knowledge and experience
  • Dimensional modeling skills

At Medium, we foster an inclusive, supportive, fun yet challenging team environment. We value having a team that is made up of a diverse set of backgrounds and respect the healthy expression of diverse opinions. We embrace experimentation and the examination of all kinds of ideas through reasoning and testing. Come join us as we continue to change the world of digital media. Medium is an equal opportunity employer.

Interested? We'd love to hear from you.
Data Analyst, Finance
finance golang python javascript scala data science Sep 28
About Kraken

Our mission is to accelerate the adoption of cryptocurrency so that you and the rest of the world can achieve financial freedom and inclusion.  Founded in 2011 and with over 4 million clients, Kraken is one of the world's largest, most successful bitcoin exchanges and we're growing faster than ever. Our range of successful products are playing an important role in the mainstream adoption of crypto assets.  We attract people who constantly push themselves to think differently and chart exciting new paths in a rapidly growing industry. Kraken is a diverse group of dreamers and doers who see value in being radically transparent. Let's change the way the world thinks about money! Join the revolution!

About the role 

This is a fully remote role; we will consider applicants based in North America, South America, and Europe.

We are looking for Data Analysts who will play a key role in driving decision making and work closely with the Finance team at Kraken. You will transform large datasets from complex systems, generate actionable insights and share the results with stakeholders at all levels of the company. You will partner and lead projects with finance teams, product management, engineering, and other enterprise level teams.  

You are a self-starter, results-driven, and passionate about driving the success of Kraken's products through the power of data analytics.

Responsibilities

  • Conduct data analysis and generate actionable insights and make recommendations for improving, developing and launching products
  • Communicate key results with self-serve tools (dashboards, analytics tools) for your product teams and key stakeholders
  • Develop and automate reporting of key performance indicators of Kraken’s various products and services at scale, solving for business priorities
  • Partner with product managers to design experiments to test hypotheses and help with idea generation and refinement
  • Collaborate with engineering teams and stakeholders to build key datasets and data pipelines using Python/ETL frameworks
  • Be a data evangelist and help Kraken improve its products and services

Requirements

  • 3+ years of industry experience in data analytics, solving problems in a finance-related field
  • Advanced knowledge of SQL, scripting languages and database concepts
  • A consistent track record of performing data analysis using a scripting language (Python, JavaScript, etc.) and/or experience with a programming language (Java, Scala, Golang, etc.)
  • Strong understanding of statistical concepts
  • Experience with BI Software (Superset / Tableau / etc)
  • Familiarity with data warehouse development and best practices
  • The versatility and willingness to learn new technologies on the job
  • The ability to clearly communicate complex results to technical and non-technical audiences

Nice to have

  • Familiarity with cryptocurrency ecosystem
  • BA/BS or MA/MS degree in Mathematics, Statistics, Information Systems, Computer Science, Business Analytics, Data Science or related technical field

We’re powered by people from around the world with their own unique backgrounds and experiences. We value all Krakenites and their talents, contributions, and perspectives.

Check out all our open roles at https://jobs.lever.co/kraken. We’re excited to see what you’re made of.  

Data Analyst, Fraud
golang python javascript scala data science big data Sep 28
About Kraken

Our mission is to accelerate the adoption of cryptocurrency so that you and the rest of the world can achieve financial freedom and inclusion.  Founded in 2011 and with over 4 million clients, Kraken is one of the world's largest, most successful bitcoin exchanges and we're growing faster than ever. Our range of successful products are playing an important role in the mainstream adoption of crypto assets.  We attract people who constantly push themselves to think differently and chart exciting new paths in a rapidly growing industry. Kraken is a diverse group of dreamers and doers who see value in being radically transparent. Let's change the way the world thinks about money! Join the revolution!

About the role 

This is a fully remote role; we will consider applicants based in North America, South America, and Europe.

We are looking for Data Analysts who will play a key role in improving risk management for core risk products at Kraken. You will transform large datasets from complex systems, generate actionable insights, and share the results with stakeholders at all levels of the company. You will partner and lead projects with product management, engineering, risk, compliance, and finance teams to influence the global risk roadmap and processes.

You are a self-starter, results-driven, and passionate about driving the success of Kraken's products through the power of data analytics.

Responsibilities

  • Conduct data analysis, generate actionable insights, and make recommendations for improving risk metrics for Kraken’s products
  • Identify and investigate anomalies, and escalate risk-related questions and requests
  • Communicate key results with self-serve tools (dashboards, analytics tools) for your product teams and key stakeholders
  • Develop and automate reporting of key performance indicators of Kraken’s various products and services at scale, solving for business priorities
  • Partner with product managers to design experiments to test hypotheses and help with idea generation and refinement
  • Collaborate with engineering teams and stakeholders to build key datasets and data pipelines using Python/ETL frameworks
  • Be a data evangelist and help Kraken improve its products and services

Requirements

  • 3+ years of industry experience in data analytics, solving problems in risk-related fields such as fraud, security, and compliance
  • Advanced knowledge of SQL, scripting languages and database concepts
  • A consistent track record of performing data analysis using a scripting language (Python, JavaScript, etc.) and/or experience with a programming language (Java, Scala, Golang, etc.)
  • Experience with big data tools (Hadoop, Presto, Spark, Druid, etc)
  • Strong understanding of statistical concepts
  • Experience with BI Software (Superset / Tableau / etc)
  • Familiarity with data warehouse development and best practices
  • The versatility and willingness to learn new technologies on the job
  • The ability to clearly communicate complex results to technical and non-technical audiences

Nice to have

  • Familiarity with cryptocurrency ecosystem
  • BA/BS or MA/MS degree in Mathematics, Statistics, Information Systems, Computer Science, Business Analytics, Data Science or related technical field

We’re powered by people from around the world with their own unique backgrounds and experiences. We value all Krakenites and their talents, contributions, and perspectives.

Check out all our open roles at https://jobs.lever.co/kraken. We’re excited to see what you’re made of.  

Data Analyst
golang python javascript scala data science big data Sep 28
About Kraken

Our mission is to accelerate the adoption of cryptocurrency so that you and the rest of the world can achieve financial freedom and inclusion.  Founded in 2011 and with over 4 million clients, Kraken is one of the world's largest, most successful bitcoin exchanges and we're growing faster than ever. Our range of successful products are playing an important role in the mainstream adoption of crypto assets.  We attract people who constantly push themselves to think differently and chart exciting new paths in a rapidly growing industry. Kraken is a diverse group of dreamers and doers who see value in being radically transparent. Let's change the way the world thinks about money! Join the revolution!

About the role 

This is a fully remote role; we will consider applicants based in North America, South America, and Europe.

We are looking for Data Analysts who will play a key role in driving decision making in building cutting-edge products at Kraken. You will transform large datasets from complex systems, generate actionable insights and share the results with stakeholders at all levels of the company. You will partner and lead projects with product management, engineering, client engagement, finance and other enterprise level teams.  

You are a self-starter, results-driven, and passionate about driving the success of Kraken's products through the power of data analytics.

Responsibilities

  • Conduct data analysis and generate actionable insights and make recommendations for improving, developing and launching products
  • Communicate key results with self-serve tools (dashboards, analytics tools) for your product teams and key stakeholders
  • Develop and automate reporting of key performance indicators of Kraken’s various products and services at scale, solving for business priorities
  • Partner with product managers to design experiments to test hypotheses and help with idea generation and refinement
  • Collaborate with engineering teams and relevant stakeholders to build key datasets and data pipelines using Python/ETL frameworks
  • Be a data evangelist and help Kraken improve its products and services

Requirements

  • 3+ years of industry experience in the data analytics field
  • Advanced knowledge of SQL, scripting languages and database concepts
  • A consistent track record of performing data analysis using a scripting language (Python, JavaScript, etc.) and/or experience with a programming language (Java, Scala, Golang, etc.)
  • Strong understanding of statistical concepts
  • Experience with BI Software (Superset / Tableau / etc)
  • Experience with big data tools (Hadoop, Presto, Spark, Druid, etc)
  • Familiarity with data warehouse development and best practices
  • The versatility and willingness to learn new technologies on the job
  • The ability to clearly communicate complex results to technical and non-technical audiences

Nice to have

  • Familiarity with cryptocurrency ecosystem
  • BA/BS or MA/MS degree in Mathematics, Statistics, Information Systems, Computer Science, Business Analytics, Data Science or related technical field

We’re powered by people from around the world with their own unique backgrounds and experiences. We value all Krakenites and their talents, contributions, and perspectives.

Check out all our open roles at https://jobs.lever.co/kraken. We’re excited to see what you’re made of.  

Full Stack Developer
go hibernate java python groovy full stack Sep 28

What we do

As self-organized, agile teams, we develop software for air freight, sea freight, handling, and customs, as well as the associated interfaces to external systems.

The Core Team's varied and demanding responsibilities include

  • Maintaining and extending the core components of Scope, such as

    • the runtime environment for the client software
    • delivery mechanisms for automatic software updates
    • GUI components (Swing, JGoodies)
    • the persistence layer (Hibernate, MySQL)
    • document storage (MongoDB)

  • Migrating existing and new features out of the monolith into separate services

    • communication with external partners (customs, airlines, ports, etc.)
    • sending and receiving email
    • printing documents
    • automatic aggregation of runtime errors

  • Developing internally used tools

    • database migration tooling
    • instant previews of template changes

  • Packaging and delivering software

    • delivery pipelines (Jenkins, podman)
    • container-based infrastructure (podman, Ansible)

  • Advising other teams

    • on the areas of work listed above
    • maintainability and testability
    • software architecture
    • performance

For these areas we use a wide range of languages and tools, e.g.:

  • Languages: Java, Groovy, Scala, Go, Python, Bash, JavaScript, HTML/CSS
  • Infrastructure: WildFly, Jenkins, Ansible, podman
  • Testing: JUnit, Spock, test (Go), unittest (Python)
  • Tools: Git, Gradle, Docker
  • IDEs: IntelliJ, Eclipse, GoLand, VSCode

What we expect

  • a willingness to quickly pick up new knowledge in order to solve a specific problem
  • a habit of continuous self-development and of independently looking for innovative solutions and opportunities for improvement
  • good English skills
  • a self-reliant, structured, and goal-oriented way of working

In addition, your prior experience should match at least one of the following profiles.

1. Enterprise Java developer

You feel at home in the Java ecosystem. Monolith or microservice, JEE is your specialty.

  • confident command of the language and its runtime environment
  • experience with JEE (application servers, Hibernate, JMS)
  • experience with profiling and heap-dump analysis
  • MySQL knowledge

2. Go developer with web experience

Lean, fast software is your favorite. You like simple, maintainable solutions and pay attention to your application's memory and network profile.

  • confident command of Go and its ecosystem
  • a good understanding of computer networks and network protocols (e.g., TCP/IP, HTTP, DNS)
  • awareness of software memory consumption and performance
  • experience with HTML/CSS/JavaScript

3. Polyglot programmer

You enjoy learning new languages and already feel at home in several of them. You are particularly interested in the differences between language runtimes and ecosystems, and you enjoy working on the integration of different technologies.

  • knowledge of several of the programming languages listed above
  • a good understanding of the different runtime environments
  • basic knowledge of various operating systems
  • an interest in delivery processes (build, test, package, deploy)
Senior Cyber Data Engineer
senior golang java scala machine learning cloud Sep 24
IronNet is looking for multiple Senior Data Engineers to join their passionate small business headquartered in Tysons Corner, VA, but operating completely remotely. Founded in 2014, IronNet launched their core product suites to help organizations collectively strengthen cybersecurity defenses against highly sophisticated adversaries, across all borders and sectors. You will be part of IronNet’s Analytics team, focusing on developing cost-effective cloud solutions that are distributed and highly scalable, processing large volumes of events.

The ideal candidates for this role will need deep Scala/Spark implementation and performance tuning abilities, as they will be deep in analytic code. Working in a highly collaborative environment, they will use AWS to provide microservices architectures that are highly reliable, redundant, and scalable, and will have the ability to understand, improve, and potentially refactor ETL and data analytics software written in Spark.

Does This Describe You?

  • You have 4+ years of experience as a Data Engineer, Machine Learning Engineer, Software Engineer, Cloud Engineer, or similar role.
  • You have 2+ years of Scala and Spark experience
  • You have experience using modern programming languages (Python, Scala, Java, Golang, etc.) to develop continuously integrated and deployed production software
  • You have experience with Kafka, HDFS, AWS EMR and Glue, and other scalable data frameworks
  • You have experience with cybersecurity event processing including high-volume ETL and analytics
If you are interested in learning more about this company or other startups and small businesses in the area, please contact us.

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.


This Year

Software Engineering Manager at Jack Henry & Associates, Inc.
api azure scala apache-kafka rust manager Sep 19

Summary

Banno is redefining the relationship between forward-thinking financial institutions and their customers. Banno leverages modern technology and an industry-leading product vision to make user-friendly mobile and web apps. We have a first-class API: the same API our web and mobile apps use is open to the community to build on top of and extend.

About You

You are curious and thrive in an environment where you are constantly learning and growing. You want to be somewhere that you are trusted and set up for success. You want to work in a collaborative environment of diverse perspectives with peers who inspire you to be better every day.

As a manager, you care as much about the overall production of the team as you do their shared culture and well-being. You strive to set the team up for success by helping them overcome potential obstacles and communicate their needs to the appropriate channels. You’ll work with the team to identify their strengths and weaknesses, and enable processes to support their unique working style without compromising their efficiency (a little rudder far from the rocks).

Banno values trust and those with a bias towards action, and as a manager you’ll work to instill these beliefs within your team. We have a remote-first culture that challenges you each day and supports your growth. We are confident you will love it here.

What you and your team are working on

The Engineering Manager is responsible for balancing the team’s work across cross-functional strategic projects. You will support and develop the engineers on your team by providing advice, coaching, and educational opportunities, as well as propose and drive processes to facilitate and promote communication, transparency, and trust. Together with the team, you will support delivery by providing estimations, context, and clarity to ensure the team delivers the agreed work with quality and excellence.

Your team is responsible for providing services that support our mobile and online banking applications. Our APIs are first-class in nature and are consumed by both our internal teams as well as teams outside of Banno. The engineers on your team are constantly keeping our apps up-to-date with the newest development and deployment practices to offer exciting user experiences for highly secured data.

About the Banno team

We are committed to creativity, thoughtfulness, culture, and openness—the perfect place to make phenomenal products that have a big impact. Our work environment echoes a modern tech startup but we have the security and benefits of a large, publicly traded company. We place high value on continuing education and contribution to, and use of, open source projects.

Our team is distributed and that means you can live and work wherever you want within the US. You’ll get the chance to choose your own tools, work with modern web technologies across the stack, and contribute to products that are used by millions of people.


Minimum Qualifications

  • 8 years of programming experience, with the majority being in a server-side language.
  • 2 years supporting services in a production environment.
  • 2 years experience leading and managing software engineers.
  • Experience working on or with a geographically-distributed team.

Bonus Points

  • Bachelor's degree
  • Experience working on a team with a CI/CD process
  • Familiarity using collaboration tools like Jira to communicate and plan the team’s work
  • Experience working with multiple teams and stakeholders to establish roadmap prioritization
  • Successfully led the definition, development, and delivery of a large cross-team project with broad scope and high-value business implications
  • Ability to identify and utilize each individual team member’s strengths, benefiting the greater organization
  • Strong written and verbal communication skills
  • Strong organizational skills and ability to work independently
  • Familiarity with functional programming concepts
  • Familiarity with stream processing concepts
Senior Software Engineer, Backend
Numbrs  
aws kubernetes spring-boot apache-kafka java backend Sep 17

Numbrs Personal Finance AG is a technology company. Our product is Numbrs, a multibanking application available for iOS and Android. Numbrs is one of the most widely used banking apps in Germany and was recently launched in the UK.

Numbrs is reshaping the future of the workplace. We are a fully remote company, at which every employee is free to live and work wherever they want.

Numbrs was founded with the vision to revolutionise banking. Therefore from day one Numbrs has always been a technology company, which is driven by a strong entrepreneurial spirit and the urge to innovate. We live and embrace technology.

At Numbrs, our Engineers don’t just develop things – we have an impact. We change the way people manage their finances by building the best products and services for our users.

Numbrs engineers are innovators, problem-solvers, and hard-workers who are building solutions in big data, mobile technology and much more. We look for professional, highly skilled engineers who evolve, adapt to change and thrive in a fast-paced, value-driven environment.

Join our dedicated technology team that builds massively scalable systems, designs low latency architecture solutions and leverages machine learning technology to turn financial data into action. Want to push the limit of personal finance management? Join Numbrs.

Job Description

You will be a part of a team that is responsible for developing, releasing, monitoring and troubleshooting large scale micro-service based distributed systems with high transaction volume. You enjoy learning new things and are passionate about developing new features, maintaining existing code, fixing bugs, and contributing to overall system design. You are a great teammate who thrives in a dynamic environment with rapidly changing priorities.

Key Qualifications

  • a Bachelor's or higher degree in a technical field of study, or equivalent practical experience
  • experience with high volume production-grade distributed systems
  • experience with micro-service based architecture
  • experience with software engineering best practices, coding standards, code reviews, testing and operations
  • hands-on experience with Spring Boot
  • professional experience in writing readable, testable and self-sustaining code
  • strong hands-on experience with Java (minimum 8 years)
  • knowledge of AWS, Kubernetes, and Docker
  • excellent troubleshooting and creative problem-solving abilities
  • excellent written and oral communication in English and interpersonal skills

Ideally, candidates will also have

  • experience with Big Data technologies such as Kafka, Spark, and Cassandra
  • experience with CI/CD toolchain products like Jira, Stash, Git, and Jenkins
  • fluent with functional, imperative, and object-oriented languages
  • experience with Scala, C++, or Golang
  • knowledge of Machine Learning

Location: residence in the UK mandatory; home office

Senior Database Reliability Engineer - FoundationDB
Cognite  
senior golang java python scala saas Sep 10

Want to help us bring our fundamental data stores to multiple clouds - public and private?

About Cognite:

Cognite AS is a global industrial Software-as-a-Service (SaaS) company enabling the full-scale digital transformation of heavy-asset industries. Our core software product, Cognite Data Fusion (CDF), powers companies with contextualized OT/IT data to develop and scale solutions that increase safety, sustainability, efficiency, and drive revenue.

About the Database Reliability Engineering Team

Cognite’s Cognite Data Fusion contextualizes operational data at scale, enabling asset-intensive industries to make data-driven decisions. Our platform is built on many different technologies, each good at solving different problems. Some of these are absolutely fundamental, and the Database Reliability Engineering team will be responsible for the continuous well-being of our portfolio of FoundationDB, PostgreSQL, Elasticsearch, and Kafka clusters, of which we expect to have thousands in the years to come, in both public and private clouds, through managed services and on self-managed Kubernetes clusters. Even when using mature as-a-Service offerings and Kubernetes operators, there are many things that can and will go wrong. Herding clusters that need upgrading, upscaling, cost-trimming, recovery, etc., while continuously serving heavy workloads with tight SLOs requires solid reliability engineering.

About our Tech stack:

We work with open source technologies that need to run in multiple cloud environments, both public clouds (like Google Cloud Platform and Azure) and private clouds with customer-provided Kubernetes.

Managed Kubernetes (GKE, AKS, Openshift) forms the base that we build our products on top of. To prove the market we initially built on PaaS offerings to store state, such as Google Bigtable, Spanner and Pubsub. We replicate data to different storage systems to be able to answer different types of queries. As we diversify the platforms our offering runs on, we are migrating to a self-run FoundationDB-based scale-out data store for managing time series data. PostgreSQL and Elasticsearch are also important examples.

Our backend developer teams work with Java, Scala, Python, and Rust. CI/CD is handled by a combination of Github, Jenkins, and Spinnaker to test and deploy code to production. The infrastructure is managed as code with Terraform and Atlantis and services are monitored using Prometheus, Grafana and Lightstep.

As we are establishing the Database Reliability Engineering team, we are looking to hire two people to work on FoundationDB. We are looking for senior or principal engineers who either know FoundationDB, or have experience with other high-performance distributed databases and an interest and willingness to dive deep and learn.

The FoundationDB Kubernetes Operator is written in Golang, and FoundationDB itself is written in Flow, an Actor system that preprocesses C++ code.

About the job to be done:

  • Join Cognite’s DBRE team as a FoundationDB sub-team, owning the full cluster lifecycle of all of our FoundationDB clusters.
  • Work with both public clouds and on private Kubernetes deployments.
  • Establish robust reliability engineering to support these clusters, managing aspects like monitoring, chaos testing, alerting, on-call rotations, internal best-practices education, and capacity forecasting.
  • Enable product teams to focus on using the databases, and not on running them – but deeply engage them to make sure the products are operable at scale.

About you:

  • A master's degree in Computer Science or an equivalent amount of experience.
  • Broad experience with DevOps practices such as CI/CD and Infrastructure as code
  • Experience with large Cloud deployments on any of AWS, GCP, or Azure.
  • Familiar with C++, Golang or other programming languages.
  • 2+ years of direct FoundationDB operational experience or
  • 6+ years of Linux operations experience.
  • 2+ years working with similar distributed systems
  • Familiarity and experience with our tech stack is beneficial.

What we offer you:

  • An opportunity to make an impact on the industrial future and be part of disruptive and groundbreaking projects
  • In-depth exposure to FoundationDB, a modern cloud-scale distributed datastore
  • Help to relocate to Norway
  • Competitive salary and benefits (including pension plans, insurance, benefits and more)
  • IT equipment and tools to allow you to be productive
  • Coverage of mobile telephone subscription and broadband connection
  • Extended private health services and free yearly health check
  • Free snacks and drinks throughout the day, to keep you running
  • Subsidized lunch at the canteen, with various food options
  • Free staffed gym
  • Social activities (book club, team sports activities - football, boxing, regular Cognite social events)
  • Free Norwegian courses for levels A1 - B1

Equal opportunities

Cognite is committed to creating a diverse and inclusive environment and is proud to be an equal opportunity employer. Embracing diversity and inclusion means that all qualified applicants will receive the same level of consideration for employment, training, compensation, and promotion. We follow up on equal assessment in the recruitment process, and that is why we ask for gender when you apply. Answering the question is kindly requested; however, it is not mandatory and will not affect your application assessment in any way.

Other information: Application deadline: ASAP

Engineering Senior Associate
Atlassian
senior java python javascript scala saas Sep 08
Atlassian is continuing to hire with all interviewing and on-boarding done virtually due to COVID-19. All new and existing Atlassians will continue to work from home until it’s safe to return to our offices. When our offices re-open, every Atlassian will have the choice to work from an office or from home.

Job duties: Build features for enterprise-scale customers to enable better governance, trust, and security. Design, create, and operate high-performance RESTful micro-services that are used by internal and external users. Work with database technology, such as RDBMS like Oracle or Postgres and/or NoSQL like DynamoDB or Cassandra, to instrument, monitor, and performance test systems. Utilize state-of-the-art tools and technologies, including but not limited to modern object-oriented programming languages (e.g., Java, Scala, Python, JavaScript) and SaaS, PaaS, and/or IaaS tools for public cloud offerings such as AWS, GAE, and Azure. Collaborate with other developers to write code for projects and deliver results that meet the users' needs.

Minimum requirements: Master's degree in Computer Science, Engineering, or a closely related field of study plus two (2) years of experience as a software engineer responsible for crafting high-performance RESTful micro-services, working with any database technology (e.g., RDBMS like Oracle or Postgres and/or NoSQL like DynamoDB or Cassandra) to instrument, monitor, and performance test systems, and utilizing tools and technologies including but not limited to modern object-oriented programming languages (e.g., Java, Scala, Python, JavaScript) and SaaS, PaaS, and/or IaaS tools for public cloud offerings such as AWS, GAE, and Azure.

Alternate requirements: Bachelor's degree in Computer Science, Engineering, or a closely related field of study plus five (5) years of progressive experience as a software engineer responsible for crafting high-performance RESTful micro-services, working with any database technology (e.g., RDBMS like Oracle or Postgres and/or NoSQL like DynamoDB or Cassandra) to instrument, monitor, and performance test systems, and utilizing tools and technologies including but not limited to modern object-oriented programming languages (e.g., Java, Scala, Python, JavaScript) and SaaS, PaaS, and/or IaaS tools for public cloud offerings such as AWS, GAE, and Azure.

Special requirements: Must pass a technical interview.
More about our benefits

Whether you work in an office or a distributed team, Atlassian is highly collaborative and yes, fun! To support you at work (and play) we offer some fantastic perks: ample time off to relax and recharge, flexible working options, five paid volunteer days a year for your favourite cause, an annual allowance to support your learning & growth, unique ShipIt days, a company paid trip after five years and lots more.

More about Atlassian

Creating software that empowers everyone from small startups to the who’s who of tech is why we’re here. We build tools like Jira, Confluence, Bitbucket, and Trello to help teams across the world become more nimble, creative, and aligned—collaboration is the heart of every product we dream of at Atlassian. From Amsterdam and Austin, to Sydney and San Francisco, we’re looking for people who want to write the future and who believe that we can accomplish so much more together than apart. At Atlassian, we’re committed to an environment where everyone has the autonomy and freedom to thrive, as well as the support of like-minded colleagues who are motivated by a common goal to: Unleash the potential of every team.

Additional Information

We believe that the unique contributions of all Atlassians are the driver of our success. To make sure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate on the basis of race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status.

All your information will be kept confidential according to EEO guidelines.
Experienced Backend Engineer - Java
java spring-boot apache-kafka kubernetes postgresql backend Sep 06

WHO WE ARE

Founded in 2007, Theorem LLC (formerly Citrusbyte) is a remote-first, fully distributed technology consulting company. Our customers, F1000s and startups alike, come to us because they need to change how something is done in order to succeed, and they're looking for a solution that isn't just about technology but also about people, process, and leadership. We consult, form a diverse team of experts, and deliver strategy and execution all under one roof. Our services range from new product development, pure R&D, legacy modernization, pricing strategy development, revenue generation, and process optimization to organizational transformation and cultural design. Learn more about Theorem LLC at: theorem.co

WHAT YOU’LL DO

As an Experienced Backend Engineer you will build greenfield and brownfield, scalable web platforms. In addition to providing input on app architecture, you will create and review pull requests daily and engage with internal teams and directly with clients within an agile environment. You will work in small teams with a product manager, designers, and other engineers to scope, design, and implement features.

WHO YOU ARE

You are a passionate technologist with the discipline to create and finish projects. You have opinions about your favorite open source projects, or perhaps you have contributed to some or started one of your own. You are a communicator. Whether you are raising a flag within an implementation or sharing your favorite playlist in Slack, you will not shy away from letting your voice be heard.

We are only considering applicants within the United States or Canada in order to satisfy security and background check requirements.

RESPONSIBILITIES AND DUTIES

  • Design and build scalable enterprise web applications using modern Java in a Spring Boot environment
  • Engage daily with your distributed, remote engineering team working on different layers of the infrastructure
  • Collaborate with product designers and clients to clarify requirements, prototype functionality and build products
  • Plan, estimate and prioritize work in a remote, distributed team environment
  • Guide our clients in best practices in order to achieve great project success

QUALIFICATION AND SKILLS

  • 5+ years of professional experience
  • Strong server side development experience utilizing Java and Spring Boot
  • Good knowledge of design and architectural patterns and development best practices
  • Previous experience building scalable microservices is required
  • Strong RDBMS experience (PostgreSQL-specific features are a plus) is required
  • Messaging experience with technologies like Kafka, Kinesis, RabbitMQ, or similar is a must; Kafka is preferred
  • Experience deploying to containerized environments such as Docker and Kubernetes is also a strong plus
  • Demonstrated proficiency with server-side development using two or more programming languages such as TypeScript, Scala, Python, Go, or Rust, in addition to what is required for this role, is highly preferred
  • Strong hands on experience using Git is required
  • Previous professional experience with cloud providers is required. 
  • Previous remote work experience is required
  • Previous experience collaborating on highly distributed teams
  • Previous consulting experience is a strong plus
  • Experience working within an Agile/Scrum environment.
  • Possess a strong and reliable internet connection.

YOU WILL BE SUCCESSFUL WHEN

  • You are able to hit the ground running within your area of expertise and are not afraid of challenges outside of it
  • Thrive on collaborating with other team members and across disciplines
  • You are happy to receive feedback and see it as an opportunity for reflection and improvement 
  • You are always learning, whether it is helping your team and customers work through new concepts or picking up new technology
  • You are astute, you know when to push an issue and when to let things lie
  • You are friendly - you reach out to your teammates even if they aren’t on your project team
  • You can work through ambiguity - you aren’t shy about asking questions to gain clarity
  • Entrepreneurial and intrapreneurial - always on the lookout for new opportunities to improve the company externally and internally
Lead Software Engineer, Scala (Remote)
scala node-js ruby saas cloud cs Aug 16

BigCommerce is disrupting the e-commerce industry as the SaaS leader for fast-growing, mid-market businesses. We enable our customers to build intuitive and engaging stores at a fraction of the cost to support every stage of their growth.

BigCommerce is looking for a Lead Software Engineer, Scala for our Storefront Team. This growing team is looking for an engineer with strong technical experience who will focus on improving the performance, reliability, and features of our Storefront Platform. We use technologies like Scala, Node.js, Ruby, gRPC, Linkerd, Docker and Nomad to build one of the fastest and most reliable ecommerce platforms in the world.

BigCommerce offers a dynamic, collaborative environment, helping you expand your skills and take ideas from inception to delivery.

What You’ll Do:

  • Build highly-available, high-performance, redundant, and scalable distributed systems in a polyglot environment
  • Write code that is high-performance, maintainable, clear, and concise
  • Work closely with operations and infrastructure to improve the architecture while building and scaling back-end services
  • Build new features with a focus on testability, extensibility, and maintainability

Who You Are

  • 6+ years of professional experience as a Software Engineer
  • Bachelor's degree in CS, EE, CE or MIS; or equivalent experience
  • 3+ years of Scala in a high traffic production environment, including performance monitoring and tuning
  • Drive our technical roadmap and direction of our stack
  • Collaborate with stakeholders, pod leaders and other teams to promote communication and collaboration.
  • Participate in code reviews and coach the team to enforce best practices.
  • Write code that is performant, fault-tolerant, maintainable, testable and concise
  • Help design/architect/execute on building new microservices
  • Professional experience with PHP, Ruby, NodeJS a strong plus
  • Knowledge of object-oriented and functional programming techniques
  • Experience monitoring and operating microservices
  • Knowledge of highly scalable architectures
  • Strong desire to learn new languages, frameworks, and design patterns
  • Familiarity with agile methodologies and ticketing systems such as JIRA
  • Experience with SaaS platforms or Cloud Computing

Our Hiring Processes Might Include

We want to see your problem-solving and analytical skills. Be prepared to write good, clean, scalable code. You don’t need to know our entire stack, but we’re looking for practical experience, someone who can solve production problems in the cloud.

  • Recruiter Phone Screen
  • Hiring Manager Screening
  • Online Code Assessment
  • On-site Interview

Note: Only candidates located in the following states can be considered for remote work: Alabama, Arizona, California, California (LA County), Colorado, Nebraska, Nevada, North Dakota, Texas, Utah, and Washington

Diversity & Inclusion at BigCommerce

We have the opportunity to build not only a great business, but a great company, with soul. Our beliefs and commitment to diversity and inclusion are a central part of achieving that.

Our dedication to diversity and inclusion is grounded in two things: a moral belief in the dignity, value, and potential of every individual, and a practical belief that diverse, inclusive teams will create the best outcomes for our customers, partners, employees, and company. We welcome everyone to be a part of our journey.

Current BigCommerce Employees: Please use the internal job board to apply for openings

Sr. Data Scientist
python scala data science machine learning big data testing Aug 06

Senior Data Scientist at NinthDecimal.

NinthDecimal (www.ninthdecimal.com) provides location-based intelligence to help advertisers plan, manage, measure, and optimize multi-platform cross-media campaigns to drive customer and revenue growth. As an industry leader in the AdTech & MarTech space, NinthDecimal delivers best-in-class measurement, insights, and analytics by deploying patented big data methodologies on a cutting-edge technology platform.

Our LocationGraph™ platform processes data on a massive scale, converting tens of billions of signals per day into accurate and actionable insights for our clients. We provide location-based intelligence services for top brands across industry verticals including retail, travel, leisure & entertainment, fast food & casual dining, telecommunications, and automotive.

As a member of the Data Science team, you’ll be responsible for developing statistical and machine-learning models that deliver accurate and robust measurement metrics of interest to our advertising clients. You will work closely with other data scientists, data analysts, product & engineering teams, and other business units. This is a great opportunity to work with real world data at scale and to help define and shape the measurement standards in a very dynamic and evolving industry.

Responsibilities:

  • Develop & deploy statistical & machine learning models at scale to create high quality disruptive products
  • Contribute to our growing portfolio of data science and technology patents
  • Establish robust processes to ensure the accuracy, stability, reproducibility, and overall quality of all data, algorithms, and the results they produce
  • Represent the Data Science team in product and roadmap design sessions
  • Participate in building reliable QA processes for both data and results
  • Collaborate on key architectural decisions and design considerations
  • Contribute to and promote good software engineering practices across the Engineering Department.
  • Understand the current data sets and models and provide thought leadership by discovering new ways to enrich and use our massive data assets

Qualifications Required:

  • A true passion for data, data quality, research and a solid data science approach
  • Master's or Ph.D. in Statistics, Economics, Operations Research, or a similar quantitative field
  • 5 to 10 years of professional experience with clear career progression and demonstrated success at developing models that drive business value
  • Excellent communication skills and the ability to present methodologies and findings to audiences with varying technical background
  • Solid understanding of probability and statistics
  • Solid understanding of research design, A/B and test-vs-control statistical testing frameworks
  • Solid understanding of unsupervised and supervised machine learning approaches including clustering and classification techniques.
  • Experience in building Machine Learning models (GLM, SVM, Bayesian Methods, Tree Based Methods, Neural Networks)
  • Solid understanding of how to assess the quality of machine learning models – including the ability to tune and optimize models and to diagnose and correct problems.
  • Experience working with multiple data types including numerical, categorical, and count data.
  • A driven leader, able to manage competing priorities and drive projects forward in a dynamic and fast paced business environment.
  • Experienced/Advanced programmer in Scala, Python, or similar programming languages
  • Experienced/advanced programmer in Spark, SQL, and Hadoop (a short Spark/Scala sketch follows this list)
  • Experience in developing algorithms and building models based on TB-scale data
  • Familiarity with the digital media / advertising industry is a big plus
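
For flavor, here is a minimal Spark MLlib sketch in Scala of the model-building work described above. It is hypothetical, not NinthDecimal's pipeline; the dataset path, column names, and label are invented for illustration:

    import org.apache.spark.ml.classification.LogisticRegression
    import org.apache.spark.ml.feature.VectorAssembler
    import org.apache.spark.sql.SparkSession

    object VisitModelSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder.appName("visit-model").getOrCreate()

        // Hypothetical training set: one row per device, numeric features plus a 0/1 label.
        val visits = spark.read.parquet("s3://example-bucket/visits/")

        val assembler = new VectorAssembler()
          .setInputCols(Array("dwell_minutes", "visit_count", "distance_km"))
          .setOutputCol("features")

        val model = new LogisticRegression()
          .setLabelCol("visited_store")
          .setFeaturesCol("features")
          .fit(assembler.transform(visits))

        println(s"Coefficients: ${model.coefficients}")
        spark.stop()
      }
    }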
Software Engineer
indi  
python aws scala rest java cloud Jul 24

Background

At numo, we incubate new “fintech” companies. Our flagship product, indi, is growing rapidly and we are seeking full-stack software engineers to join our development team.

The Job

Here’s what you’ll be working on:

indi is a one-of-a-kind digital banking product targeted at self-employed customers who are part of the rapidly growing gig-economy space. We are building a product to address the challenges faced by those customers in a unique way.

Job Responsibilities:

  • Be an integral part of the development team across our technology stack which includes Scala, Python, Flutter, Dart and is hosted on AWS.
  • Be willing to learn new technologies. If you are not familiar with some or all of our tech stack, we will be happy to help you ramp up.
  • Focus on creating software that is scalable, robust, testable, easy to maintain and easily deployed

We are looking for:

  • Real-world experience building products, ideally 5+ years
  • Expertise in modern architectures (e.g., microservices, event-based, map-reduce, etc.)
  • Experience with deploying and developing for cloud environments (AWS)
  • Familiarity with modern open source thinking and tools (git, continuous builds, continuous deployment, containers, dev ops, Jenkins, Docker)
  • Desire to build and be part of a fun, high-functioning team
  • A computer science degree is desired, but not required if you have real-world experience

What numo offers

  • Competitive salary
  • Opportunity to own equity in indi
  • Cool office space in East Liberty
  • Great benefits
Solutions Architect - Toronto
java python scala big data linux cloud Jul 15
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Coatue Management, Altimeter Capital and Franklin Templeton to join earlier investors Sequoia Capital, Benchmark, and Index Ventures in the recent Series E financing of a combined $250 million at a $4.5B valuation. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!

We are looking for a Solutions Architect to join our Customer Success team. As a Solutions Architect (SA), you will help customers leverage streaming architectures and applications to achieve their business results. In this role, you will interact directly with customers to provide software architecture, design, and operations expertise that leverages your deep knowledge of and experience in Apache Kafka, the Confluent platform, and complementary systems such as Hadoop, Spark, Storm, relational and NoSQL databases. You will develop and advocate best practices, gather and validate critical product feedback, and help customers overcome their operational challenges.

Throughout all these interactions, you will build a strong relationship with your customer in a very short space of time, ensuring exemplary delivery standards. You will also have the opportunity to help customers build state-of-the-art streaming data infrastructure, in partnership with colleagues who are widely recognized as industry leaders, as well as optimizing and debugging customers' existing deployments.
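
For a concrete taste of the day-to-day tooling, here is a minimal Scala sketch using the standard Kafka producer client. It is illustrative only; the broker address, topic, and payload are placeholders, not Confluent code:

    import java.util.Properties
    import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
    import org.apache.kafka.common.serialization.StringSerializer

    object OrderEventsProducer {
      def main(args: Array[String]): Unit = {
        val props = new Properties()
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092") // placeholder broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
        props.put(ProducerConfig.ACKS_CONFIG, "all") // wait for all in-sync replicas: durability first

        val producer = new KafkaProducer[String, String](props)
        try {
          // Each business change becomes a keyed event; the key preserves per-order ordering.
          producer.send(new ProducerRecord("orders", "order-42", """{"status":"created"}"""))
          producer.flush()
        } finally producer.close()
      }
    }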

Location:
Toronto with 60-75% travel expected.

Responsibilities

  • Helping a customer determine their platform and/or application strategy for moving to a more real-time, event-based business. Such engagements often involve remote preparation; presenting an onsite or remote workshop for the customer’s architects, developers, and operations teams; investigating (with Engineering and other coworkers) solutions to difficult challenges; and writing a recommendations summary doc.
  • Providing feedback to the Confluent Product and Engineering groups
  • Building tooling for another team or the wider company to help us push our technical boundaries and improve our ability to deliver consistently with high quality
  • Testing performance and functionality of new components developed by Engineering
  • Writing or editing documentation and knowledge base articles, including reference architecture materials and design patterns based on customer experiences
  • Honing your skills, building applications, or trying out new product features
  • Participating in community and industry events

Requirements

  • Deep experience designing, building, and operating in-production Big Data, stream processing, and/or enterprise data integration solutions, ideally using Apache Kafka
  • Demonstrated experience successfully managing multiple B2B infrastructure software development projects, including driving expansion, customer satisfaction, feature adoption, and retention
  • Experience operating Linux (configure, tune, and troubleshoot both RedHat and Debian-based distributions)
  • Experience using cloud providers (Amazon Web Services, Google Cloud, Microsoft Azure) for running high-throughput systems
  • Experience with Java Virtual Machine (JVM) tuning and troubleshooting
  • Experience with distributed systems (Kafka, Hadoop, Cassandra, etc.)
  • Proficiency in Java
  • Strong desire to tackle hard technical problems, and proven ability to do so with little or no direct daily supervision
  • Excellent communication skills, with an ability to clearly and concisely explain tricky issues and complex solutions
  • Ability to quickly learn new technologies
  • Ability and willingness to travel up to 50% of the time to meet with customers

Bonus Points

  • Experience helping customers build Apache Kafka solutions alongside Hadoop technologies, relational and NoSQL databases, message queues, and related products
  • Experience with Scala, Python, or Go
  • Experience working with a commercial team and demonstrated business acumen
  • Experience working in a fast-paced technology start-up
  • Experience managing projects, using any known methodology to scope, manage, and deliver on plan no matter the complexity
  • Bachelor-level degree in computer science, engineering, mathematics, or another quantitative field


Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.

Click here to review our California Candidate Privacy Notice, which describes how and when Confluent, Inc., and its group companies, collects, uses, and shares certain personal information of California job applicants and prospective employees.
Backend Software Engineer, Enterprise & Migrations
 
backend java python javascript scala saas Jul 01
Atlassian is continuing to hire with all interviewing and on-boarding done virtually due to COVID-19. Everyone new to the team, along with our current staff, will temporarily work from home until it is safe to return to our offices.

Atlassian is looking for a backend software engineer to join our Enterprise and Migrations team. You’ll be joining a team focused on building features for our enterprise-scale customers to enable better governance, trust, and security. Our team has a direct impact on the growth of Atlassian and is the proud owner of the Atlassian Access product. We are enabling cross-product experiences, and are committed to removing all blockers for adoption of cloud for enterprise customers.

More about you
As a backend software engineer on this team, you will work with a talented team of Product Managers, Designers, and Architects to build application-layer services encompassing backend development, monitoring, scaling and optimizing to make the administration of Atlassian products simple at Enterprise scale.

You will be empowered to drive innovation by coming up with new and exciting ideas to creatively solve issues, as well as actively look for opportunities to improve the design, interface, and architecture of Atlassian's products on the cloud.

On your first day, we'll expect you to have:

  • Bachelor's degree in Engineering, Computer Science, or equivalent
  • Experience crafting and implementing highly scalable and performant RESTful micro-services
  • Proficiency in any modern object-oriented programming language (e.g., Java, Scala, Python, Javascript, etc.)
  • Fluency in any one database technology (e.g. RDBMS like Oracle or Postgres and/or NoSQL like DynamoDB or Cassandra)
  • Real passion for collaboration and strong interpersonal and communication skills
  • Broad knowledge and understanding of SaaS, PaaS, IaaS industry with hands-on experience of public cloud offerings (AWS, GAE, Azure)
  • Familiarity with cloud architecture patterns and an engineering discipline to produce software with quality

It’s great, but not required, if you have:

  • Experience using AWS, Kubernetes and Docker containers
  • Familiarity with GraphQL, web application development and JavaScript frameworks (React, JQuery, Angular)
More about our benefits

Whether you work in an office or a distributed team, Atlassian is highly collaborative and yes, fun! To support you at work (and play) we offer some fantastic perks: ample time off to relax and recharge, flexible working options, five paid volunteer days a year for your favourite cause, an annual allowance to support your learning & growth, unique ShipIt days, a company paid trip after five years and lots more.

More about Atlassian

Creating software that empowers everyone from small startups to the who’s who of tech is why we’re here. We build tools like Jira, Confluence, Bitbucket, and Trello to help teams across the world become more nimble, creative, and aligned—collaboration is the heart of every product we dream of at Atlassian. From Amsterdam and Austin, to Sydney and San Francisco, we’re looking for people who want to write the future and who believe that we can accomplish so much more together than apart. At Atlassian, we’re committed to an environment where everyone has the autonomy and freedom to thrive, as well as the support of like-minded colleagues who are motivated by a common goal to: Unleash the potential of every team.

Additional Information

We believe that the unique contributions of all Atlassians are the driver of our success. To make sure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate on the basis of race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status.

All your information will be kept confidential according to EEO guidelines.
Solutions Architect - West Coast
java python scala big data linux cloud Jul 01
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Coatue Management, Altimeter Capital and Franklin Templeton to join earlier investors Sequoia Capital, Benchmark, and Index Ventures in the recent Series E financing of a combined $250 million at a $4.5B valuation. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!

We are looking for a Solutions Architect to join our Customer Success team. As a Solutions Architect (SA), you will help customers leverage streaming architectures and applications to achieve their business results. In this role, you will interact directly with customers to provide software architecture, design, and operations expertise that leverages your deep knowledge of and experience in Apache Kafka, the Confluent platform, and complementary systems such as Hadoop, Spark, Storm, relational and NoSQL databases. You will develop and advocate best practices, gather and validate critical product feedback, and help customers overcome their operational challenges.

Throughout all these interactions, you will build a strong relationship with your customer in a very short space of time, ensuring exemplary delivery standards. You will also have the opportunity to help customers build state-of-the-art streaming data infrastructure, in partnership with colleagues who are widely recognized as industry leaders, as well as optimizing and debugging customers' existing deployments.

Location:
Anywhere on the West Coast, USA, with 60-75% travel expected.

Responsibilities

  • Helping a customer determine their platform and/or application strategy for moving to a more real-time, event-based business. Such engagements often involve remote preparation; presenting an onsite or remote workshop for the customer’s architects, developers, and operations teams; investigating (with Engineering and other coworkers) solutions to difficult challenges; and writing a recommendations summary doc.
  • Providing feedback to the Confluent Product and Engineering groups
  • Building tooling for another team or the wider company to help us push our technical boundaries and improve our ability to deliver consistently with high quality
  • Testing performance and functionality of new components developed by Engineering
  • Writing or editing documentation and knowledge base articles, including reference architecture materials and design patterns based on customer experiences
  • Honing your skills, building applications, or trying out new product features
  • Participating in community and industry events

Requirements

  • Deep experience designing, building, and operating in-production Big Data, stream processing, and/or enterprise data integration solutions, ideally using Apache Kafka
  • Demonstrated experience successfully managing multiple B2B infrastructure software development projects, including driving expansion, customer satisfaction, feature adoption, and retention
  • Experience operating Linux (configure, tune, and troubleshoot both RedHat and Debian-based distributions)
  • Experience using cloud providers (Amazon Web Services, Google Cloud, Microsoft Azure) for running high-throughput systems
  • Experience with Java Virtual Machine (JVM) tuning and troubleshooting
  • Experience with distributed systems (Kafka, Hadoop, Cassandra, etc.)
  • Proficiency in Java
  • Strong desire to tackle hard technical problems, and proven ability to do so with little or no direct daily supervision
  • Excellent communication skills, with an ability to clearly and concisely explain tricky issues and complex solutions
  • Ability to quickly learn new technologies
  • Ability and willingness to travel up to 50% of the time to meet with customers

Bonus Points

  • Experience helping customers build Apache Kafka solutions alongside Hadoop technologies, relational and NoSQL databases, message queues, and related products
  • Experience with Scala, Python, or Go
  • Experience working with a commercial team and demonstrated business acumen
  • Experience working in a fast-paced technology start-up
  • Experience managing projects, using any known methodology to scope, manage, and deliver on plan no matter the complexity
  • Bachelor-level degree in computer science, engineering, mathematics, or another quantitative field


Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.

Click here to review our California Candidate Privacy Notice, which describes how and when Confluent, Inc., and its group companies, collects, uses, and shares certain personal information of California job applicants and prospective employees.
Product Security Engineer
 
java python scala testing Jun 22
Atlassian is continuing to hire with all interviewing and on-boarding done virtually due to COVID-19. Everyone new to the team, along with our current staff, will temporarily work from home until it is safe to return to our offices.


JOB DUTIES:    

ENSURE SECURITY (CONFIDENTIALITY, INTEGRITY, AND AVAILABILITY) OF COMPANY'S INTERNAL SOFTWARE SERVICES AND EXTERNAL SOFTWARE PRODUCTS. PRACTICE THREAT MODELING, ARCHITECTURE/DESIGN REVIEW, STATIC ANALYSIS, AND PENETRATION TESTING TO ACHIEVE THESE OBJECTIVES. WORK CLOSELY WITH DEVELOPMENT TEAMS AT EACH STAGE OF THE SOFTWARE DEVELOPMENT LIFECYCLE TO INCORPORATE SECURE DESIGN, DELIVER SECURE CODE, IDENTIFY VULNERABILITIES, AND DELIVER REMEDIATION. SERVE AS SUBJECT MATTER EXPERT FOR ANY CLIENT COMPANY WITH SECURITY QUESTIONS. WORK WITH COMPANY'S SUPPORT TEAMS TO ADDRESS CUSTOMER SECURITY CONCERNS AND REPORTS. WRITE AUTOMATION TO CONTINUOUSLY TEST COMPANY'S PRODUCTS/INFRASTRUCTURE, IDENTIFY NEW VULNERABILITIES, AND ALLOW THE SECURITY TEAM TO FUNCTION MORE EFFICIENTLY. COLLABORATE CLOSELY WITH ALL ENGINEERING GROUPS. WORK IN CONJUNCTION WITH THE SECURITY INTELLIGENCE TEAM TO INVESTIGATE THE ROOT CAUSE OF SECURITY INCIDENTS. RECEIVE, TRIAGE, AND RESPOND TO VULNERABILITY REPORTS FROM THE PUBLIC AND VIA COMPANY'S BUG BOUNTY. WRITE NEW CODE PRIMARILY UTILIZING JAVA OR PYTHON TO PRODUCE UNIQUE AND PROPRIETARY SOFTWARE. PERFORM SOURCE CODE AUDITING FOR JAVA, SCALA, AND PYTHON LANGUAGES, COMPLETE WEB SCANNING, AND UTILIZE CUSTOM AND COMMERCIAL TOOLS. CONDUCT INDEPENDENT RESEARCH RELATED TO SECURITY ENGINEERING.

MINIMUM REQUIREMENTS:

MASTER’S DEGREE IN COMPUTER SCIENCE, COMPUTER ENGINEERING, INFORMATION SECURITY OR RELATED FIELD OF STUDY PLUS TWO (2) YEARS OF EXPERIENCE IN INFORMATION SECURITY CONSULTING, SECURITY ENGINEERING, APPLICATION SECURITY ENGINEERING, PRODUCT SECURITY ENGINEERING OR SECURITY FOCUSED DEVELOPMENT AT SOFTWARE COMPANIES.

ALTERNATE REQUIREMENTS:

BACHELOR’S DEGREE IN COMPUTER SCIENCE, COMPUTER ENGINEERING, INFORMATION SECURITY OR RELATED FIELD OF STUDY PLUS FIVE (5) YEARS OF EXPERIENCE IN INFORMATION SECURITY CONSULTING, SECURITY ENGINEERING, APPLICATION SECURITY ENGINEERING, PRODUCT SECURITY ENGINEERING OR SECURITY FOCUSED DEVELOPMENT AT SOFTWARE COMPANIES.

SPECIAL REQUIREMENTS:

MUST PASS TECHNICAL INTERVIEW.
More about our benefits

Whether you work in an office or a distributed team, Atlassian is highly collaborative and yes, fun! To support you at work (and play) we offer some fantastic perks: ample time off to relax and recharge, flexible working options, five paid volunteer days a year for your favourite cause, an annual allowance to support your learning & growth, unique ShipIt days, a company paid trip after five years and lots more.

More about Atlassian

Creating software that empowers everyone from small startups to the who’s who of tech is why we’re here. We build tools like Jira, Confluence, Bitbucket, and Trello to help teams across the world become more nimble, creative, and aligned—collaboration is the heart of every product we dream of at Atlassian. From Amsterdam and Austin, to Sydney and San Francisco, we’re looking for people who want to write the future and who believe that we can accomplish so much more together than apart. At Atlassian, we’re committed to an environment where everyone has the autonomy and freedom to thrive, as well as the support of like-minded colleagues who are motivated by a common goal to: Unleash the potential of every team.

Additional Information

We believe that the unique contributions of all Atlassians are the driver of our success. To make sure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate on the basis of race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status.

All your information will be kept confidential according to EEO guidelines.
Senior Software Engineer
 
senior python scala cloud Jun 22
Atlassian is continuing to hire with all interviewing and on-boarding done virtually due to COVID-19. Everyone new to the team, along with our current staff, will temporarily work from home until it is safe to return to our offices.

JOB DUTIES:
AS A MEMBER OF THE IDENTITY TEAM, BUILD WORLD-CLASS IDENTITY AND ACCESS MANAGEMENT SOLUTIONS FOR ATLASSIAN PRODUCTS. DRIVE THE TECHNICAL DIRECTION AND IMPLEMENTATION ACROSS IDENTITY, PRIVACY, AND ACCESS CONTROL TO ENSURE ATLASSIAN PRODUCTS REMAIN TRUSTWORTHY FOR ITS CUSTOMERS. USE STRONG ARCHITECTURE AND TECHNICAL PROCESS KNOWLEDGE AND HANDS-ON CODING ABILITY TO DEVELOP AND IMPLEMENT NEW SECURITY, IDENTITY, AND ANTI-SPAM FEATURES AND NEW IDENTITY MANAGEMENT SOLUTIONS AT SCALE. DESIGN, IMPLEMENT AND LAUNCH HIGHLY SECURE HIGH-PERFORMANCE RESTFUL MICROSERVICES IN A PUBLIC CLOUD INFRASTRUCTURE. BUILD TECHNOLOGICAL INFRASTRUCTURE AND SCALE PRODUCTS WHILE UTILIZING PROFESSIONAL KNOWLEDGE AND EXPERIENCE WITH MODERN PROGRAMMING LANGUAGES (JAVA, SCALA, PYTHON, AND GO), DATABASE TECHNOLOGIES (RDBMS, ORACLE AND/OR NOSQL CASSANDRA) AND SOFTWARE DEVELOPMENT METHODOLOGIES. DESIGN AND IMPLEMENT NEW SOFTWARE FEATURES AND FUNCTIONALITIES BASED ON LARGE SCALE DISTRIBUTED SYSTEMS. EMPLOY HANDS-ON CODING ABILITY AND SOFTWARE ARCHITECTURE TO FORECAST AND PROPOSE CHANGES OR IMPROVEMENTS TO PRODUCTS AND TECHNOLOGIES. COLLABORATE WITH OTHER ENGINEERING TEAMS, ENSURING INNOVATIVE WORK IS DELIVERED. DEVELOP AND DEPLOY SOFTWARE SERVICES IN A CLOUD INFRASTRUCTURE USING CONTINUOUS DELIVERY METHODS AND UTILIZE KNOWLEDGE OF AGILE SOFTWARE DEVELOPMENT METHODOLOGIES (E.G., XP, SCRUM).

MINIMUM REQUIREMENTS:
MASTER’S DEGREE IN COMPUTER SCIENCE OR A RELATED FIELD OF STUDY AND 2 YEARS OF EXPERIENCE BUILDING TECHNOLOGICAL INFRASTRUCTURE AND SCALABLE SOLUTIONS USING MODERN PROGRAMMING LANGUAGES (JAVA, SCALA, PYTHON, AND GO), AND DATABASE TECHNOLOGIES (RDBMS, ORACLE AND/OR NOSQL CASSANDRA) AND SOFTWARE DEVELOPMENT METHODOLOGIES, DEVELOPMENT AND DEPLOYMENT OF SOFTWARE SERVICES IN A CLOUD INFRASTRUCTURE USING CONTINUOUS DELIVERY METHODS, AND WORKING WITH AGILE SOFTWARE DEVELOPMENT METHODOLOGIES (XP, SCRUM, ETC.).

ALTERNATE REQUIREMENTS:
BACHELOR’S DEGREE IN COMPUTER SCIENCE OR A RELATED FIELD OF STUDY AND 5 YEARS OF EXPERIENCE BUILDING TECHNOLOGICAL INFRASTRUCTURE AND SCALABLE SOLUTIONS USING MODERN PROGRAMMING LANGUAGES (JAVA, SCALA, PYTHON, AND GO), AND DATABASE TECHNOLOGIES (RDBMS, ORACLE AND/OR NOSQL CASSANDRA) AND SOFTWARE DEVELOPMENT METHODOLOGIES, DEVELOPMENT AND DEPLOYMENT OF SOFTWARE SERVICES IN A CLOUD INFRASTRUCTURE USING CONTINUOUS DELIVERY METHODS, AND WORKING WITH AGILE SOFTWARE DEVELOPMENT METHODOLOGIES (XP, SCRUM, ETC.).

SPECIAL REQUIREMENTS:
MUST PASS TECHNICAL INTERVIEW.
More about our benefits

Whether you work in an office or a distributed team, Atlassian is highly collaborative and yes, fun! To support you at work (and play) we offer some fantastic perks: ample time off to relax and recharge, flexible working options, five paid volunteer days a year for your favourite cause, an annual allowance to support your learning & growth, unique ShipIt days, a company paid trip after five years and lots more.

More about Atlassian

Creating software that empowers everyone from small startups to the who’s who of tech is why we’re here. We build tools like Jira, Confluence, Bitbucket, and Trello to help teams across the world become more nimble, creative, and aligned—collaboration is the heart of every product we dream of at Atlassian. From Amsterdam and Austin, to Sydney and San Francisco, we’re looking for people who want to write the future and who believe that we can accomplish so much more together than apart. At Atlassian, we’re committed to an environment where everyone has the autonomy and freedom to thrive, as well as the support of like-minded colleagues who are motivated by a common goal to: Unleash the potential of every team.

Additional Information

We believe that the unique contributions of all Atlassians are the driver of our success. To make sure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate on the basis of race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status.

All your information will be kept confidential according to EEO guidelines.
Fullstack Software Engineer, Confluence
 
javascript scala saas cloud aws frontend Jun 05
Atlassian is continuing to hire with all interviewing and on-boarding done virtually due to COVID-19. Everyone new to the team, along with our current staff, will temporarily work from home until it is safe to return to our offices.

Atlassian is looking for expert and hardworking Software Engineers to join the Confluence team in our Mountain View, CA office. Our group has over 100 platform engineers building services that power the most critical parts of Atlassian’s experience. While we may be a big group, individuals on the newly formed platform adoption team can have a massive impact across the organization. We work strategically during post-M&A activities to bridge user experiences across all our products through targeted platform adoption. This is a key part of the Atlassian business model and a high-visibility role spanning multiple organizations.

On any given week you’ll be talking to engineers, product managers, designers and leaders across the company. If you are looking for an opportunity to not only tackle hard software integration problems but also hard company integration problems, then this is the role for you. You’ll drive forward and own projects that can span 100-person teams of teams, all while working with a hardworking team of engineers who have your back. You won’t always be measured by your code, but by the outcomes you can produce by bringing a diverse set of people together to achieve the best outcomes. Your thought leadership and solution architecture will be sought after as people look to you for solutions to the hardest problems in the company.

On your first day, we'll expect you to have:

  • Experience in Scala and NodeJS
  • Experience with React and other front end JavaScript frameworks
  • 3+ years of experience crafting and implementing high-performance RESTful micro-services serving millions of requests a day
  • Understanding of SaaS, PaaS, IaaS industry with hands on experience with public or private cloud offerings (e.g., AWS, GCP, Azure)
  • Previously worked across multiple codebases when delivering features
  • Knowledge to evaluate trade-offs between correctness, robustness, performance, space and time
  • Experience in taking ownership of features, with a team on short and long-running projects
  • Comprehensive understanding of microservices based architecture
  • A champion of practices like continuous delivery and infrastructure as code

It’s awesome, but not required if you have:

  • 6+ years of industry experience as a Software Engineer
  • Comprehensive knowledge about identity platforms, IDPaaS such as Auth0, Authentication, and Authorization
  • Experience working as a Solutions Architect or a background in consulting
  • Experience with large scale distributed systems and event-driven architectures
  • Practical knowledge of agile software development methodologies (e.g., XP, scrum)
More about our benefits

Whether you work in an office or a distributed team, Atlassian is highly collaborative and yes, fun! To support you at work (and play) we offer some fantastic perks: ample time off to relax and recharge, flexible working options, five paid volunteer days a year for your favourite cause, an annual allowance to support your learning & growth, unique ShipIt days, a company paid trip after five years and lots more.

More about Atlassian

Creating software that empowers everyone from small startups to the who’s who of tech is why we’re here. We build tools like Jira, Confluence, Bitbucket, and Trello to help teams across the world become more nimble, creative, and aligned—collaboration is the heart of every product we dream of at Atlassian. From Amsterdam and Austin, to Sydney and San Francisco, we’re looking for people who want to write the future and who believe that we can accomplish so much more together than apart. At Atlassian, we’re committed to an environment where everyone has the autonomy and freedom to thrive, as well as the support of like-minded colleagues who are motivated by a common goal to: Unleash the potential of every team.

Additional Information

We believe that the unique contributions of all Atlassians are the driver of our success. To make sure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate on the basis of race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status.

All your information will be kept confidential according to EEO guidelines.
Associate Solutions Architect
java python scala big data linux cloud Jun 03
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Coatue Management, Altimeter Capital and Franklin Templeton to join earlier investors Sequoia Capital, Benchmark, and Index Ventures in the recent Series E financing of a combined $250 million at a $4.5B valuation. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!

We are looking for a Solutions Architect to join our Customer Success team. As a Solutions Architect (SA), you will help customers leverage streaming architectures and applications to achieve their business results. In this role, you will interact directly with customers to provide software architecture, design, and operations expertise that leverages your deep knowledge of and experience in Apache Kafka, the Confluent platform, and complementary systems such as Hadoop, Spark, Storm, relational and NoSQL databases. You will develop and advocate best practices, gather and validate critical product feedback, and help customers overcome their operational challenges.

Throughout all these interactions, you will build a strong relationship with your customer in a very short space of time, ensuring exemplary delivery standards. You will also have the opportunity to help customers build state-of-the-art streaming data infrastructure, in partnership with colleagues who are widely recognized as industry leaders, as well as optimizing and debugging customers' existing deployments.

Location:
You will be based in the North East (remote), with 60-70% travel

Responsibilities

  • Helping a customer determine their platform and/or application strategy for moving to a more real-time, event-based business. Such engagements often involve remote preparation; presenting an onsite or remote workshop for the customer’s architects, developers, and operations teams; investigating (with Engineering and other coworkers) solutions to difficult challenges; and writing a recommendations summary doc.
  • Providing feedback to the Confluent Product and Engineering groups
  • Building tooling for another team or the wider company to help us push our technical boundaries and improve our ability to deliver consistently with high quality
  • Testing performance and functionality of new components developed by Engineering
  • Writing or editing documentation and knowledge base articles, including reference architecture materials and design patterns based on customer experiences
  • Honing your skills, building applications, or trying out new product features
  • Participating in community and industry events

Requirements

  • Deep experience designing, building, and operating in-production Big Data, stream processing, and/or enterprise data integration solutions, ideally using Apache Kafka
  • Demonstrated experience successfully managing multiple B2B infrastructure software development projects, including driving expansion, customer satisfaction, feature adoption, and retention
  • Experience operating Linux (configure, tune, and troubleshoot both RedHat and Debian-based distributions)
  • Experience using cloud providers (Amazon Web Services, Google Cloud, Microsoft Azure) for running high-throughput systems
  • Experience with Java Virtual Machine (JVM) tuning and troubleshooting
  • Experience with distributed systems (Kafka, Hadoop, Cassandra, etc.)
  • Proficiency in Java
  • Strong desire to tackle hard technical problems, and proven ability to do so with little or no direct daily supervision
  • Excellent communication skills, with an ability to clearly and concisely explain tricky issues and complex solutions
  • Ability to quickly learn new technologies
  • Ability and willingness to travel up to 50% of the time to meet with customers

Bonus Points

  • Experience helping customers build Apache Kafka solutions alongside Hadoop technologies, relational and NoSQL databases, message queues, and related products
  • Experience with Scala, Python, or Go
  • Experience working with a commercial team and demonstrated business acumen
  • Experience working in a fast-paced technology start-up
  • Experience managing projects, using any known methodology to scope, manage, and deliver on plan no matter the complexity
  • Bachelor-level degree in computer science, engineering, mathematics, or another quantitative field


Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.

Click here to review our California Candidate Privacy Notice, which describes how and when Confluent, Inc., and its group companies, collects, uses, and shares certain personal information of California job applicants and prospective employees.
Backend Engineer
scala php java elasticsearch postgresql backend May 30

Ascendify is looking for a full time Backend Engineer to join our team. 
As a Backend Engineer you will work within our Backend Engineering team to build new projects and maintain existing ones. You must be capable of working in a fast-paced, rapidly changing environment and be self-motivated, results-driven, and detail-oriented to achieve success.

A successful candidate can work remotely in this role but must be able to work during core Pacific Standard Time hours, be productive in a remote environment without direct supervision, and be legally able to work in the United States without the need for sponsorship. Candidates outside the United States need not apply.

Responsibilities:

  • Write high-performance, reusable, modular code
  • Write automated unit tests
  • Create new functions and features to improve the Ascendify product
  • Write technical specs for new features, including database ERD diagrams

Qualifications:

  • 5+ years of experience working with a scripting language: Python, PHP, or Ruby
  • 3+ years with a compiled language (Scala, Java, etc.)
  • Experience working with an Object Oriented language
  • SQL experience
  • ElasticSearch experience
  • Extraordinary communication skills
  • Willing and able to learn (if needed) and primarily use PHP and Scala

Preferences:

  • B.S. in Computer Science or a related discipline
  • Experience with Play Framework
  • DevOps experience is a plus
Senior Engineer
 
senior java python javascript scala saas May 12
Atlassian is continuing to hire for all open roles with all interviewing and on-boarding done virtually due to COVID-19. Everyone new to the team, along with our current staff, will temporarily work from home until it is safe to return to our offices.

JOB DUTIES: 

RESPONSIBLE FOR MAKING ATLASSIAN'S CLOUD USABLE AT ENTERPRISE SCALE BY BUILDING OUT ENTERPRISE-GRADE SCALE ACCOMPANIED BY GOVERNANCE, TRUST AND SECURITY. RESPONSIBLE FOR COLLABORATING WITH A TEAM OF PRODUCT MANAGERS, DESIGNERS AND ARCHITECTS TO BUILD ATLASSIAN'S APPLICATION-LAYER SERVICES TO ENSURE THE ADMINISTRATION OF ATLASSIAN PRODUCTS AND PROCESSES IS SIMPLE AT ENTERPRISE SCALE, BASED ON FLUENCY IN ANY MODERN OBJECT-ORIENTED PROGRAMMING LANGUAGE INCLUDING BUT NOT LIMITED TO JAVA, THE JAVA SPRING FRAMEWORK, SCALA, PYTHON AND JAVASCRIPT. DRIVE ATLASSIAN'S INNOVATIVE SOFTWARE PRODUCTS AND PROCESSES BY IDENTIFYING NEW WAYS TO SOLVE TECHNICAL ISSUES USING KNOWLEDGE OF DATABASE TECHNOLOGY (E.G., RDBMS LIKE ORACLE OR POSTGRES AND/OR NOSQL LIKE DYNAMODB, MONGODB OR CASSANDRA) AND KNOWLEDGE AND UNDERSTANDING OF THE SAAS, PAAS, IAAS INDUSTRY WITH HANDS-ON EXPERIENCE OF PUBLIC CLOUD OFFERINGS INCLUDING BUT NOT LIMITED TO AWS, AZURE, GCP. RESPONSIBLE FOR MICROSERVICES OR DISTRIBUTED SYSTEMS AND MONITORING AND MAINTAINING PRODUCTION SYSTEMS. USE KNOWLEDGE OF CLOUD ARCHITECTURE PATTERNS. IDENTIFY OPPORTUNITIES FOR IMPROVEMENT TO THE DESIGN, INTERFACE AND ARCHITECTURE OF ATLASSIAN'S SOFTWARE PRODUCTS ON THE CLOUD. COMMIT TO CHALLENGING CURRENT SOFTWARE TRENDS IN THE CLOUD DEVELOPMENT MARKET IN ORDER TO CREATE A SOLID EXPERIENCE ACROSS THE ATLASSIAN BRAND. MONITOR ALL PRODUCTION SYSTEMS IN AWS, REMEDIATE TECHNICAL ISSUES WHEN DISCOVERED AND MAINTAIN THREE-NINES AVAILABILITY FOR THE SERVICES INVOLVED. CRAFT AND IMPLEMENT HIGH-PERFORMANCE RESTFUL MICRO-SERVICES THAT SERVE MILLIONS OF REQUESTS PER DAY.

MINIMUM REQUIREMENTS: 

BACHELORS DEGREE IN COMPUTER SCIENCE, INFORMATION SYSTEMS OR A CLOSELY RELATED FIELD OF STUDY PLUS FIVE (5) YEARS OF EXPERIENCE AS A SOFTWARE DEVELOPER WITH HANDS-ON EXPERIENCE OF PUBLIC CLOUD OFFERINGS (AWS, AZURE, GCP), RELATIONAL DATABASES SUCH AS POSTGRES, THE JAVA SPRING FRAMEWORK, NOSQL SUCH AS DYNAMODB OR MONGODB, MICROSERVICES OR DISTRIBUTED SYSTEMS AND MONITORING AND MAINTAINING PRODUCTION SYSTEMS.

ALTERNATE REQUIREMENTS:

MASTERS DEGREE IN COMPUTER SCIENCE, INFORMATION SYSTEMS OR RELATED FIELD OF STUDY PLUS TWO (2) YEARS OF EXPERIENCE AS A SOFTWARE DEVELOPER WITH HANDS-ON EXPERIENCE OF PUBLIC CLOUD OFFERINGS (AWS, AZURE, GCP), RELATIONAL DATABASES SUCH AS POSTGRES, THE JAVA SPRING FRAMEWORK, NOSQL SUCH AS DYNAMODB OR MONGODB, MICROSERVICES OR DISTRIBUTED SYSTEMS AND MONITORING AND MAINTAINING PRODUCTION SYSTEMS.

SPECIAL REQUIREMENTS: MUST PASS TECHNICAL INTERVIEW.
More about our benefits

Whether you work in an office or a distributed team, Atlassian is highly collaborative and yes, fun! To support you at work (and play) we offer some fantastic perks: ample time off to relax and recharge, flexible working options, five paid volunteer days a year for your favourite cause, an annual allowance to support your learning & growth, unique ShipIt days, a company paid trip after five years and lots more.

More about Atlassian

Creating software that empowers everyone from small startups to the who’s who of tech is why we’re here. We build tools like Jira, Confluence, Bitbucket, and Trello to help teams across the world become more nimble, creative, and aligned—collaboration is the heart of every product we dream of at Atlassian. From Amsterdam and Austin, to Sydney and San Francisco, we’re looking for people who want to write the future and who believe that we can accomplish so much more together than apart. At Atlassian, we’re committed to an environment where everyone has the autonomy and freedom to thrive, as well as the support of like-minded colleagues who are motivated by a common goal to: Unleash the potential of every team.

Additional Information

We believe that the unique contributions of all Atlassians are the driver of our success. To make sure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate on the basis of race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status.

All your information will be kept confidential according to EEO guidelines.
Cloud Scala Software Developer
scala playframework cloud java javascript docker May 08

Cloud Scala Software Developer (Remote United States)

At Railroad19, we develop customized software solutions and provide software development services. We are currently seeking a Scala Software Developer who is fluent in Scala and web applications. The successful engineer will be a technical resource for the development of clean and maintainable code. In addition to contributing code and tangible deliverables, the role is expected to act as an adviser, helping identify, educate, and foster best-in-class solutions. Creating these relationships requires strong communication skills.


At Railroad19, you are part of a company that values your work and gives you the tools you need to succeed. We are headquartered in Saratoga Springs, New York, but we are a distributed team of remote developers across the US. 

This is a full-time role with vacation, full benefits and 401k.  Railroad19 provides competitive compensation with excellent benefits and a great corporate culture.


The role is remote, U.S.-located only, and full time (no contractors, Corp-to-Corp, or 1099).

Core responsibilities:

  • Understand our client's fast-moving business requirements
  • Negotiate appropriate solutions with multiple stakeholders
  • Write and maintain scalable enterprise quality software
  • Develop new applications and production application support
  • Participate in detailed technical design, development, and implementation of applications using existing and emerging technology platforms.
  • Manage the complete software development life cycle
  • Write functional and unit tests in order to maintain code quality
  • Develop understanding of client business processes, objectives, and solution requirements.
  • Participate in project work groups with subject matter experts and stakeholders to understand specific needs
  • Collaborate with other teams in order to deliver a high-performance application that contains few or no defects
  • Identify new opportunities, tools, and services to enhance the custom software platform
  • Support and troubleshoot issues (process & system), identify root cause, and proactively recommend sustainable corrective actions

Skills & Experience:

  • Advanced experience developing Scala-based software solutions
  • Extensive enterprise experience in web applications
  • Enterprise experience with relational and non-relational databases
  • Hands on experience with Azure and/or Google Cloud, Docker or container orchestration (Kubernetes) is a plus
  • Hands on experience with Postgres, MySQL, or Redis technologies is a plus
  • Hands on experience with the Play Framework (a minimal controller sketch follows this list)
  • Hands on experience with Java 8 is a plus
  • Hands on experience with NoSQL technologies
  • Familiarity with React and/or similar JavaScript frameworks is a plus
  • Demonstrates willingness to learn new technologies and takes pride in delivering working software
  • Excellent oral and written communication skills, analytical, and problem-solving skills
  • Experience participating on an agile team
  • Is self-directed and can effectively contribute with little supervision
  • Experience in Banking/Finance fields a plus
  • Bachelor's or master's degree in computer science, computer engineering, or other technical discipline; or equivalent work experience
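
As a point of reference for the Play Framework item above, here is a minimal controller sketch (illustrative only; the package, route, and controller name are invented):

    package controllers

    import javax.inject.Inject
    import play.api.libs.json.Json
    import play.api.mvc.{AbstractController, ControllerComponents}

    // Wire up in conf/routes with:  GET  /health  controllers.HealthController.health
    class HealthController @Inject()(cc: ControllerComponents) extends AbstractController(cc) {
      // A trivial readiness endpoint that returns JSON.
      def health = Action {
        Ok(Json.obj("status" -> "ok"))
      }
    }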
Senior Backend Developer
komoot  
aws java scala kotlin backend senior May 06

Millions of people experience real-life adventures with our apps. We help people all over the world discover the best hiking and biking routes, empowering our users to explore more of the great outdoors. And we’re good at it: Google and Apple have listed us as one of their Apps of the Year numerous times and, with more than 10 million users and 100,000 five-star reviews, komoot is on its way to becoming one of the most popular cycling and hiking platforms.
Join our fully remote team of 60+ people and change the way people explore!

As komoot’s next backend engineer, you’ll join a highly motivated team of tech enthusiasts. We are focused on impact; that’s why we love to find simple and smart solutions to complex problems and embrace modern technologies to face our tough challenges.
Join us if you live and love infrastructure as code, automating workflows, x10 scaling challenges and building resilient, self-healing micro-services.

Why you will love it

  • You’ll work on a global product that inspires millions of users to enjoy the great outdoors
  • Positively impact millions of users directly with your onboarding project
  • Due to the nature of our data and our scale, you will face interesting challenges that take innovative, non-standard solutions
  • We believe good ideas count more than titles
  • You’ll take ownership over your projects from day one
  • Small overhead: you will work in a small and effective cross-functional team
  • You’ll work together with enthusiastic engineers, hikers and cyclists.
  • We let you work from wherever you want, be it a beach, the mountains, your house, a co-working location of your choice, our HQ in Potsdam, or anywhere else that lies in a time zone between UTC-1 and UTC+3
  • You’ll travel with our team to amazing outdoor places several times a year to exchange ideas, learnings and go for hikes and rides. Check out this video to find out more about our team.

What you will do

  • Implement new product features closely with client developers, designers, copy writers, quality assurance, data scientists and product managers
  • Keep our system state-of-the-art and resilient for our fast growing traffic
  • Develop end-to-end solutions including concept, road map planning, implementation, testing, deployment and monitoring
  • Build new micro-services with Kotlin and AWS, and improve existing ones
  • Work on high-traffic online services (like REST APIs) and offline workers for data crunching

You will be successful in this position if you

  • Are highly self-driven, responsible and keen to learn and improve
  • Have 3+ years of professional experience in developing distributed and resilient web applications
  • Have 3+ years of professional experience with Kotlin, Java or Scala
  • Have 3+ years of professional experience with AWS, Google Cloud or Microsoft Azure
  • Have experience with Infrastructure as Code, continuous integration & deployment and monitoring
  • Enjoy paying attention to details and care about solid solutions
  • Are a great communicator in a diverse team

Sounds like you?

Then send us the following

  • Your CV
  • A write-up explaining who you are and why you are interested in working at komoot
  • Examples of your work (e.g. GitHub, PDFs, Slideshare, etc.)
  • Feel free to send us something that shows us a little more about what you’re interested in, be it your Twitter/Instagram account, or your OpenStreetMap profile if you have one
Senior Fullstack Engineer, Confluence
 
senior javascript scala saas cloud aws May 05
Atlassian is continuing to hire for all open roles with all interviewing and on-boarding done virtually due to COVID-19. Everyone new to the team, along with our current staff, will temporarily work from home until it is safe to return to our offices.

Atlassian is looking for expert and hardworking Software Engineers to join the Confluence team in our Mountain View, CA office. Our group has over 100 platform engineers building services that power the most critical parts of Atlassian’s experience. While we may be a big group, individuals on the newly formed platform adoption team can have a massive impact across the organization. We work strategically during post-M&A activities to bridge user experiences across all our products through targeted platform adoption. This is a key part of the Atlassian business model and a high-visibility role spanning multiple organizations.

On any given week you’ll be talking to engineers, product managers, designers and leaders across the company. If you are looking for an opportunity to not only tackle hard software integration problems but also hard company integration problems, then this is the role for you. You’ll drive forward and own projects that can span 100-person teams of teams, all while working with a hardworking team of engineers who have your back. You won’t always be measured by your code, but by the outcomes you can produce by bringing a diverse set of people together to achieve the best outcomes. Your thought leadership and solution architecture will be sought after as people look to you for solutions to the hardest problems in the company.

On your first day, we'll expect you to have:

  • Experience in Scala and NodeJS
  • Experience with React and other front end JavaScript frameworks
  • 3+ years of experience crafting and implementing high-performance RESTful micro-services serving millions of requests a day
  • Understanding of SaaS, PaaS, IaaS industry with hands on experience with public or private cloud offerings (e.g., AWS, GCP, Azure)
  • Previously worked across multiple codebases when delivering features
  • Knowledge to evaluate trade-offs between correctness, robustness, performance, space and time
  • Experience in taking ownership of features, with a team on short and long-running projects
  • Comprehensive understanding of microservices based architecture
  • A champion of practices like continuous delivery and infrastructure as code

It’s awesome, but not required if you have:

  • 6+ years of industry experience as a Software Engineer
  • Comprehensive knowledge about identity platforms, IDPaaS such as Auth0, Authentication, and Authorization
  • Experience working as a Solutions Architect or a background in consulting
  • Experience with large scale distributed systems and event-driven architectures
  • Practical knowledge of agile software development methodologies (e.g., XP, scrum)
More about our benefits

Whether you work in an office or a distributed team, Atlassian is highly collaborative and yes, fun! To support you at work (and play) we offer some fantastic perks: ample time off to relax and recharge, flexible working options, five paid volunteer days a year for your favourite cause, an annual allowance to support your learning & growth, unique ShipIt days, a company paid trip after five years and lots more.

More about Atlassian

Creating software that empowers everyone from small startups to the who’s who of tech is why we’re here. We build tools like Jira, Confluence, Bitbucket, and Trello to help teams across the world become more nimble, creative, and aligned—collaboration is the heart of every product we dream of at Atlassian. From Amsterdam and Austin, to Sydney and San Francisco, we’re looking for people who want to write the future and who believe that we can accomplish so much more together than apart. At Atlassian, we’re committed to an environment where everyone has the autonomy and freedom to thrive, as well as the support of like-minded colleagues who are motivated by a common goal to: Unleash the potential of every team.

Additional Information

We believe that the unique contributions of all Atlassians are the driver of our success. To make sure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate on the basis of race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status.

All your information will be kept confidential according to EEO guidelines.
Data Engineer
 
java python scala big data aws May 04
Atlassian is continuing to hire for all open roles with all interviewing and on-boarding done virtually due to COVID-19. Everyone new to the team, along with our current staff, will temporarily work from home until it is safe to return to our offices.

Atlassian is looking for a Data Engineer to join our Go-To Market Data Engineering (GTM-DE) team, which is responsible for building our data lake, maintaining our big data pipelines / services and facilitating the movement of billions of messages each day. We work directly with business stakeholders and plenty of platform and engineering teams to enable growth and retention strategies at Atlassian. We are looking for an open-minded, structured thinker who is passionate about building services that scale.

On a typical day you will help our stakeholder teams ingest data faster into our data lake, find ways to make our data pipelines more efficient, or come up with ideas to help establish self-serve data engineering within the company. You will then move on to building micro-services and architecting, designing, and enabling self-serve capabilities at scale to help Atlassian grow.

You’ll get the opportunity to work on an AWS-based data lake backed by the full suite of open-source projects such as Presto, Spark, Airflow and Hive. We are a team with little legacy in our tech stack, and as a result you’ll spend less time paying off technical debt and more time identifying ways to make our platform better and improve our users' experience.
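
To make the day-to-day concrete, here is a minimal sketch of the kind of batch job a Spark-backed data lake typically runs: read raw events, drop malformed rows, and write a partitioned table back out. The bucket names and columns are hypothetical illustrations, not Atlassian's actual pipeline.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    // Hypothetical data-lake batch job: read raw JSON events from S3,
    // discard rows missing an id, derive a date partition, write Parquet.
    object CleanEvents {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("clean-events").getOrCreate()

        val raw = spark.read.json("s3://example-raw-bucket/events/")

        val cleaned = raw
          .filter(col("event_id").isNotNull)            // drop malformed rows
          .withColumn("event_date", to_date(col("ts"))) // partition column

        cleaned.write
          .mode("overwrite")
          .partitionBy("event_date")
          .parquet("s3://example-lake-bucket/events_clean/")

        spark.stop()
      }
    }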

More about you
As a data engineer in the GTM-DE team, you will have the opportunity to apply your strong technical experience building highly reliable services to managing and orchestrating a multi-petabyte-scale data lake. You enjoy working in a fast-paced environment, and you are able to take vague requirements and transform them into solid solutions. You are motivated by solving challenging problems, where creativity is as crucial as your ability to write code and test cases.

On your first day, we'll expect you to have:

  • At least 3 years of professional experience as a software engineer or data engineer
  • A BS in Computer Science or equivalent experience
  • Strong programming skills (some combination of Python, Java, and Scala preferred)
  • Experience with data modeling
  • Knowledge of data warehousing concepts
  • Experience writing SQL, structuring data, and data storage practices
  • Experience building data pipelines and micro-services
  • Experience with Spark, Hive, Airflow, and other technologies for processing large volumes of streaming data
  • A willingness to accept failure, learn and try again
  • An open mind to try solutions that may seem crazy at first
  • Experience working on Amazon Web Services (in particular using EMR, Kinesis, RDS, S3, SQS and the like)

It's preferred, but not technically required, that you have:

  • Experience building self-service tooling and platforms
  • Experience designing and building Kappa-architecture platforms
  • A passion for building and running continuous integration pipelines
  • Experience building pipelines using Databricks and familiarity with its APIs
  • Contributions to open-source projects (e.g., operators in Airflow)
More about the team
Data is a BIG deal at Atlassian. We ingest over 180 billion events each month into our analytics platform and we have dozens of teams across the company driving their decisions and guiding their operations based on the data and services we provide.

It’s the data engineering team’s job to make more Atlassians data-driven and facilitate growth. We do this by providing metrics and other data elements which are reliable and trustworthy, as well as services and data products that help teams better self-serve and improve their time to reliable insights.

You’ll be joining a team with a brand new mission, expanding into a new office. There will be plenty of challenges and scope to grow. We work very closely with the Sales, Marketing and Commerce teams. We value it when people ask hard questions and challenge each other to constantly improve our work. We are independent but love highly collaborative team environments, so you'll get the opportunity to work with lots of other awesome people just like you. We're all about enabling teams to execute growth and customer retention strategies by providing the right data fabrics and tools.

More about our benefits

Whether you work in an office or a distributed team, Atlassian is highly collaborative and yes, fun! To support you at work (and play) we offer some fantastic perks: ample time off to relax and recharge, flexible working options, five paid volunteer days a year for your favourite cause, an annual allowance to support your learning & growth, unique ShipIt days, a company paid trip after five years and lots more.

More about Atlassian

Creating software that empowers everyone from small startups to the who’s who of tech is why we’re here. We build tools like Jira, Confluence, Bitbucket, and Trello to help teams across the world become more nimble, creative, and aligned—collaboration is the heart of every product we dream of at Atlassian. From Amsterdam and Austin, to Sydney and San Francisco, we’re looking for people who want to write the future and who believe that we can accomplish so much more together than apart. At Atlassian, we’re committed to an environment where everyone has the autonomy and freedom to thrive, as well as the support of like-minded colleagues who are motivated by a common goal to: Unleash the potential of every team.

Additional Information

We believe that the unique contributions of all Atlassians are the driver of our success. To make sure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate on the basis of race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status.

All your information will be kept confidential according to EEO guidelines.
Backend Software Engineer, Identity Platform
 
backend java scala saas cloud aws Apr 14
Atlassian is continuing to hire for all open roles with all interviewing and on-boarding done virtually due to COVID-19. Everyone new to the team, along with our current staff, will temporarily work from home until it is safe to return to our offices.

Atlassian is looking for a talented backend software engineer to build the next generation Identity Platform.

Over the last two years, the Identity team has completely rebuilt its infrastructure around a micro-services architecture with highly scalable services utilizing AWS resources. Aside from maintaining and growing user management features, Identity is responsible for operating its infrastructure reliably at a massive and constantly growing scale.

All products and services at Atlassian integrate with the Identity Platform, so you will collaborate with other Developer teams, Product Managers, Quality Engineers, and Support Engineers to ship an Identity experience that our users love. You will directly impact our customers' experience through the design and implementation of new features and functionalities. You will be a part of a small and high-energy team delivering improvements for our Identity infrastructure that powers all of our Cloud products.

On your first day, we'll expect you to have:

  • 4+ years of relevant industry experience
  • Specialization in Java and Spring Framework
  • Proven understanding of micro-services oriented architecture and extensible REST APIs
  • Experience with AWS cloud infrastructure
  • Fluency in any one database technology (e.g. RDBMS like Oracle or Postgres and/or NoSQL like DynamoDB or Cassandra)

It’s awesome, but not required, if you have:

  • Knowledge of the principles used to build fault tolerance, reliability, and durability into software systems
  • The ability to evaluate trade-offs between correctness, robustness, performance, space, and time
  • Experience taking ownership of features while working with a team on short- and long-running projects
  • Experience with OAuth2, OpenID Connect, SAML protocols, and encryption technologies
  • Experience with relational databases, such as MySQL and PostgreSQL
  • Experience with large scale distributed systems and event-driven architectures
  • Understanding of the SaaS, PaaS, and IaaS industry, with hands-on experience with public cloud offerings (e.g., AWS, GAE, Azure)
  • Familiarity with other programming languages and frameworks, such as Node.js, Scala, and Go
  • Practical knowledge of agile software development methodologies (e.g., XP, scrum)
  • Experience with continuous delivery and infrastructure as code
More about our benefits

Whether you work in an office or a distributed team, Atlassian is highly collaborative and yes, fun! To support you at work (and play) we offer some fantastic perks: ample time off to relax and recharge, flexible working options, five paid volunteer days a year for your favourite cause, an annual allowance to support your learning & growth, unique ShipIt days, a company paid trip after five years and lots more.

More about Atlassian

Creating software that empowers everyone from small startups to the who’s who of tech is why we’re here. We build tools like Jira, Confluence, Bitbucket, and Trello to help teams across the world become more nimble, creative, and aligned—collaboration is the heart of every product we dream of at Atlassian. From Amsterdam and Austin, to Sydney and San Francisco, we’re looking for people who want to write the future and who believe that we can accomplish so much more together than apart. At Atlassian, we’re committed to an environment where everyone has the autonomy and freedom to thrive, as well as the support of like-minded colleagues who are motivated by a common goal to: Unleash the potential of every team.

Additional Information

We believe that the unique contributions of all Atlassians are the driver of our success. To make sure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate on the basis of race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status.

All your information will be kept confidential according to EEO guidelines.
Senior Cloud Software Engineer
cloud senior golang java python scala Apr 09

At CrowdStrike we’re on a mission - to stop breaches. Our groundbreaking technology, services delivery, and intelligence gathering together with our innovations in machine learning and behavioral-based detection, allow our customers to not only defend themselves, but do so in a future-proof manner. We’ve earned numerous honors and top rankings for our technology, organization and people – clearly confirming our industry leadership and our special culture driving it. We also offer flexible work arrangements to help our people manage their personal and professional lives in a way that works for them. So if you’re ready to work on unrivaled technology where your desire to be part of a collaborative team is met with a laser-focused mission to stop breaches and protect people globally, let’s talk.

About the Role:

The Sr. Software Engineer role is part of the Engineering team at CrowdStrike Romania, which builds globally distributed, fault-tolerant and highly scalable cloud-based critical systems using Golang.

Don't worry if you don't know Golang, we will teach you!

If you are a hands-on engineer who loves to operate at scale, let's talk!

This position is open to candidates in Bucharest (Office or Romania Remote), Brasov, Cluj, Iasi and Timisoara (Remote).

You will:

  • Lead backend engineering efforts from rapid prototypes to large-scale application services across CrowdStrike products
  • Make it possible for internal teams to easily work with data at the petabyte scale
  • Leverage and build cloud based services to support our top rated security intelligence platform
  • Work with security researchers to troubleshoot time-sensitive production issues
  • Keep petabytes of critical business data safe, secure, and available
  • Brainstorm, define, and build collaboratively with members across multiple teams
  • Obsess about learning, and champion the newest technologies & tricks with others, raising the technical IQ of the team
  • Be mentored and mentor other developers on web, backend and data storage technologies and our system
  • Constantly re-evaluate our product to improve architecture, knowledge models, user experience, performance and stability
  • Be an energetic ‘self-starter’ with the ability to take ownership and be accountable for deliverables
  • Use and give back to the open source community

You'll use:

  • Golang
  • Python
  • Cassandra
  • Kafka
  • Elasticsearch
  • SQL
  • Redis
  • ZMQ
  • Hadoop
  • AWS Cloud
  • Git

What You’ll Need:

  • Bachelor's Degree in Computer Science (or commensurate experience in data structures/algorithms/distributed systems)
  • Strong programming skills – Python / Java / Scala or Golang
  • The ability to design scalable and re-usable SOA services
  • The ability to scale backend systems – sharding, partitioning, scaling horizontally are second nature to you
  • The desire to ship code and the love of seeing your bits run in production
  • Deep understanding of distributed systems and scalability challenges
  • A deep understanding of multi-threading, concurrency, and parallel processing technologies
  • Team player skills – we embrace collaborating as a team as much as possible
  • A thorough understanding of engineering best practices from appropriate testing paradigms to effective peer code reviews and resilient architecture
  • The ability to thrive in a fast paced, test-driven, collaborative and iterative programming environment
  • The skills to meet your commitments on time and produce high quality software that is unit tested, code reviewed, and checked in regularly for continuous integration

Bonus Points awarded for:

  • Contributions to the open source community (GitHub, Stack Overflow, blogging)
  • Existing exposure to Golang, Scala, AWS, Cassandra, Kafka, Redis, Splunk
  • Prior experience in the cybersecurity or intelligence fields

Benefits of Working at CrowdStrike:

  • Market leader in compensation
  • Comprehensive health benefits
  • Working with the latest technologies
  • Training budget (certifications, conferences)
  • Flexible work hours and remote friendly environment
  • Wellness programs
  • Stocked fridges, coffee, soda, and lots of treats
  • Peer recognition
  • Inclusive culture focused on people, customers and innovation
  • Regular team activities, including happy hours, community service events

Bring your experience in distributed technologies and algorithms, your great API and systems design sensibilities, and your passion for writing code that performs at extreme scale. You will help build a platform that scales to millions of events per second and Terabytes of data per day. If you want a job that makes a difference in the world and operates at high scale, you’ve come to the right place.

We are committed to building an inclusive culture of belonging that not only embraces the diversity of our people but also reflects the diversity of the communities in which we work and the customers we serve. We know that the happiest and highest performing teams include people with diverse perspectives and ways of solving problems so we strive to attract and retain talent from all backgrounds and create workplaces where everyone feels empowered to bring their full, authentic selves to work.

CrowdStrike is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex including sexual orientation and gender identity, national origin, disability, protected veteran status, or any other characteristic protected by applicable federal, state, or local law.

Site Reliability Engineer
golang scala machine learning cloud aws testing Apr 09

At CrowdStrike we’re on a mission - to stop breaches. Our groundbreaking technology, services delivery, and intelligence gathering together with our innovations in machine learning and behavioral-based detection, allow our customers to not only defend themselves, but do so in a future-proof manner. We’ve earned numerous honors and top rankings for our technology, organization and people – clearly confirming our industry leadership and our special culture driving it. We also offer flexible work arrangements to help our people manage their personal and professional lives in a way that works for them. So if you’re ready to work on unrivaled technology where your desire to be part of a collaborative team is met with a laser-focused mission to stop breaches and protect people globally, let’s talk.

About the Role

At CrowdStrike we operate a massive cloud platform that protects our customers from a variety of bad actors: cyber criminals, hacktivists and state sponsored attackers. We process tens of billions of events a day and we store and use petabytes of data. We’re looking for an engineer who is passionate about site reliability and is excited about joining us to ensure our service runs 24/7.

This position is open to candidates in Bucharest (Office or Romania Remote), Brasov, Cluj, Iasi and Timisoara (Remote).

You will:

  • Be responsible for all operational aspects of our platform - Availability, Latency, Throughput, Monitoring, Issue Response (analysis, remediation, deployment) and Capacity Planning with respect to Latency and Throughput. Build tooling to help monitor and analyze the platform
  • Work in a team of highly motivated engineers
  • Use your passion for technology to ensure our platform operates flawlessly 24x7
  • Obsess about learning, and champion the newest technologies & tricks with others, raising the technical IQ of the team. We don’t expect you to know all the technology we use but you will be able to get up to speed on new technology quickly
  • Have broad exposure to our entire architecture and become one of our experts in overall process flow
  • Be a great code reader and debugger; you will have to dive into large code bases, identify issues and remediate them
  • Have an intrinsic drive to make things better
  • Bias towards small development projects and the occasional larger project
  • Use and give back to the open source community

You'll use:

  • Go(Golang)
  • Python
  • ElasticSearch
  • Cassandra
  • Kafka
  • Redis, Memcached
  • AWS Cloud

Key Qualifications:

You have:

  • Degree in Computer Science (or commensurate experience in data structures/algorithms/distributed systems).
  • Experience as a sustaining engineer or SRE for a cloud-based product.
  • Good understanding of distributed systems and scalability challenges – sharding, partitioning, scaling horizontally are second nature to you.
  • A thorough understanding of engineering best practices from appropriate testing paradigms to effective peer code reviews and resilient architecture.
  • The ability to thrive in a fast paced, test-driven, collaborative and iterative programming environment.
  • A good understanding of multi-threading, concurrency, and parallel processing technologies.
  • The skills to meet your commitments on time and produce high quality software that is unit tested, code reviewed, and checked in regularly for continuous integration.
  • Team player skills – we embrace collaborating as a team as much as possible.

Bonus points awarded for:

  • Contributions to the open source community (GitHub, Stack Overflow, blogging).
  • Existing exposure to Go, Kafka, AWS, Cassandra, Elasticsearch, Scala, Hadoop, Spark
  • Prior experience in the Cyber Security or intelligence fields
  • Background or familiarity with File Integrity Monitoring (FIM), Cloud Security Posture Management (CSPM), or Vulnerability Management

Benefits of working at CrowdStrike:

  • Market leader in compensation
  • Comprehensive health benefits
  • Working with the latest technologies
  • Training budget (certifications, conferences)
  • Flexible work hours and remote friendly environment
  • Wellness programs
  • Stocked fridges, coffee, soda, and lots of treats
  • Peer recognition
  • Inclusive culture focused on people, customers and innovation
  • Regular team activities, including happy hours, community service events

We are committed to building an inclusive culture of belonging that not only embraces the diversity of our people but also reflects the diversity of the communities in which we work and the customers we serve. We know that the happiest and highest performing teams include people with diverse perspectives and ways of solving problems so we strive to attract and retain talent from all backgrounds and create workplaces where everyone feels empowered to bring their full, authentic selves to work.

CrowdStrike is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex including sexual orientation and gender identity, national origin, disability, protected veteran status, or any other characteristic protected by applicable federal, state, or local law.

Federal Solutions Architect - Secret Clearance
java python scala big data linux cloud Apr 06
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Sequoia Capital, Benchmark, and Index Ventures to recently invest a combined $125 million in our Series D financing. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!

We are looking for a Solutions Architect to join our Customer Success team. As a Solutions Architect (SA), you will help customers leverage streaming architectures and applications to achieve their business results. In this role, you will interact directly with customers to provide software architecture, design, and operations expertise that leverages your deep knowledge of and experience in Apache Kafka, the Confluent platform, and complementary systems such as Hadoop, Spark, Storm, relational and NoSQL databases. You will develop and advocate best practices, gather and validate critical product feedback, and help customers overcome their operational challenges.

Throughout all these interactions, you will build a strong relationship with your customer in a very short space of time, ensuring exemplary delivery standards. You will also have the opportunity to help customers build state-of-the-art streaming data infrastructure, in partnership with colleagues who are widely recognized as industry leaders, as well as optimizing and debugging customers' existing deployments.
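
For readers new to the event-streaming model described above, here is a minimal, hypothetical Scala sketch of appending one event to a Kafka topic using the standard producer client; the topic name and payload are made up for illustration.

    import java.util.Properties
    import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

    // Minimal sketch: every state change is published as an event on a topic,
    // giving downstream systems a continually updating stream rather than
    // static snapshots. Topic and payload are illustrative only.
    object ProduceEvent extends App {
      val props = new Properties()
      props.put("bootstrap.servers", "localhost:9092")
      props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
      props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

      val producer = new KafkaProducer[String, String](props)
      producer.send(new ProducerRecord("orders", "order-42", """{"status":"shipped"}"""))
      producer.close()
    }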

Location:
You will be based in LOCATION, with 50% travel expected.

Responsibilities

  • Helping a customer determine his/her platform and/or application strategy for moving to a more real-time, event-based business. Such engagements often involve remote preparation; presenting an onsite or remote workshop for the customer’s architects, developers, and operations teams; investigating (with Engineering and other coworkers) solutions to difficult challenges; and writing a recommendations summary doc.
  • Providing feedback to the Confluent Product and Engineering groups
  • Building tooling for another team or the wider company to help us push our technical boundaries and improve our ability to deliver consistently with high quality
  • Testing performance and functionality of new components developed by Engineering
  • Writing or editing documentation and knowledge base articles, including reference architecture materials and design patterns based on customer experiences
  • Honing your skills, building applications, or trying out new product features
  • Participating in community and industry events

Requirements

  • Deep experience designing, building, and operating in-production Big Data, stream processing, and/or enterprise data integration solutions, ideally using Apache Kafka
  • Demonstrated experience successfully managing multiple B2B infrastructure software development projects, including driving expansion, customer satisfaction, feature adoption, and retention
  • Experience operating Linux (configure, tune, and troubleshoot both RedHat and Debian-based distributions)
  • Experience using cloud providers (Amazon Web Services, Google Cloud, Microsoft Azure) for running high-throughput systems
  • Experience with Java Virtual Machine (JVM) tuning and troubleshooting
  • Experience with distributed systems (Kafka, Hadoop, Cassandra, etc.)
  • Proficiency in Java
  • Strong desire to tackle hard technical problems, and proven ability to do so with little or no direct daily supervision
  • Excellent communication skills, with an ability to clearly and concisely explain tricky issues and complex solutions
  • Ability to quickly learn new technologies
  • Ability and willingness to travel up to 50% of the time to meet with customers
  • TS/SCI clearance required

Bonus Points

  • Experience helping customers build Apache Kafka solutions alongside Hadoop technologies, relational and NoSQL databases, message queues, and related products
  • Experience with Scala, Python, or Go
  • Experience working with a commercial team and demonstrated business acumen
  • Experience working in a fast-paced technology start-up
  • Experience managing projects, using any known methodology to scope, manage, and deliver on plan no matter the complexity
  • Bachelor-level degree in computer science, engineering, mathematics, or another quantitative field


Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.
Senior Scala/Kubernetes Engineer
Luna  
kubernetes aws terraform scala senior saas Apr 04

Overview

Luna is looking for a senior cloud software engineer to take charge of the design, development, and evolution of the new SaaS offering for Luna, a project said by Singularity University to have the potential to change the lives of one billion people. If you bring strong technical skills and have a passion for collaboration, this role could be for you.

As a senior cloud software engineer, you'll be leading the effort to design and develop our new SaaS offering, providing a web-based version of Luna to our clients. Your work will be integral to the next phase of Luna's development, as we expand our offering beyond the open-source project. You'll be able to work with a world-class team of skilled engineers, community managers, and business developers (from Bloomberg, GitHub and PayPal to name a few), and put your indelible stamp on Luna's future.

What You'll Do

As a senior cloud software engineer you'll be in charge of building the SaaS offering for Luna, hosting both the language and its IDE in the cloud. This will involve:

  • Working closely with the internal teams to design a secure and scalable SaaS architecture.
  • Developing a SaaS solution based upon that design with robust tooling and reliability, as well as inbuilt support for collaboration.
  • Hosting the architecture on a cloud provider without becoming too dependent on any one given platform.
  • Contributing to the evolution of this vibrant open-source project by bringing a new component to its ecosystem and product offering.

The Skills We're Looking For

We have a few particular skills that we're looking for in this role:

  • 3+ years of experience designing secure, scalable, and collaboration-ready SaaS architectures.
  • A strong commitment to security and scalability that permeates your approach to design.
  • Experience with Kubernetes deployment and administration using EKS.
  • Experience with Scala and Akka (see the brief sketch after this list).
  • Practical knowledge about AWS networking and storage architectures, and how they integrate with Kubernetes.
  • Experience managing AWS resources using Terraform.
  • Experience working in an SRE capacity on monitoring, incident handling and continuous service improvement.
  • Experience building and delivering CI/CD pipelines to ensure service stability and reliability.
  • Experience employing DevOps practices such as the 'continuous everything' and 'everything as code' styles of work.
  • Experience working with Git, and preferably GitOps.
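
As a taste of the Scala/Akka side of this stack, here is a minimal, hypothetical Akka Typed actor; the names are illustrative and this is not Luna's code.

    import akka.actor.typed.scaladsl.Behaviors
    import akka.actor.typed.{ActorSystem, Behavior}

    // Minimal Akka Typed sketch: a single actor that greets by name.
    object Greeter {
      final case class Greet(name: String)

      def apply(): Behavior[Greet] =
        Behaviors.receiveMessage { msg =>
          println(s"Hello, ${msg.name}!")
          Behaviors.same
        }
    }

    object Main extends App {
      val system = ActorSystem(Greeter(), "greeter") // top-level actor
      system ! Greeter.Greet("Luna")
      system.terminate()
    }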

It would be a big bonus if you also had:

  • Skills working with Azure and GCP to help expand beyond AWS in the future.
  • Experience working in close conjunction with multiple product teams to ensure that the solutions you provide meet their needs.
Senior Software Engineer, Backend
Numbrs  
java backend microservices kubernetes machine-learning senior Mar 25

At Numbrs, our engineers don’t just develop things – we have an impact. We change the way people manage their finances by building the best products and services for our users.

Numbrs engineers are innovators, problem-solvers, and hard-workers who are building solutions in big data, mobile technology and much more. We look for professional, highly skilled engineers who evolve, adapt to change and thrive in a fast-paced, value-driven environment.

Join our dedicated technology team that builds massively scalable systems, designs low latency architecture solutions and leverages machine learning technology to turn financial data into action. Want to push the limit of personal finance management? Join Numbrs.

Job Description

You will be a part of a team that is responsible for developing, releasing, monitoring and troubleshooting large scale micro-service based distributed systems with high transaction volume. You enjoy learning new things and are passionate about developing new features, maintaining existing code, fixing bugs, and contributing to overall system design. You are a great teammate who thrives in a dynamic environment with rapidly changing priorities.

All candidates will have

  • a Bachelor's or higher degree in a technical field of study or equivalent practical experience
  • experience with high volume production grade distributed systems
  • experience with micro-service based architecture
  • experience with software engineering best practices, coding standards, code reviews, testing and operations
  • hands-on experience with Spring Boot
  • professional experience in writing readable, testable and self-sustaining code
  • strong hands-on experience with Java (minimum 8 years)
  • knowledge of AWS, Kubernetes, and Docker
  • excellent troubleshooting and creative problem-solving abilities
  • excellent written and oral communication in English and interpersonal skills

Ideally, candidates will also have

  • experience with Big Data technologies such as Kafka, Spark, and Cassandra
  • experience with CI/CD toolchain products like Jira, Stash, Git, and Jenkins
  • fluency with functional, imperative and object-oriented languages
  • experience with Scala, C++, or Golang
  • knowledge of Machine Learning

Location: residence in the UK mandatory; home office

Full Stack Engineer - DSS
Dataiku  
full stack java python javascript scala big data Mar 13
Dataiku’s mission is big: to enable all people throughout companies around the world to use data by removing friction surrounding data access, cleaning, modeling, deployment, and more. But it’s not just about technology and processes; at Dataiku, we also believe that people (including our people!) are a critical piece of the equation.



As a full stack developer in the Dataiku engineering team, you will play a crucial role in helping us have a real impact on the daily life of data analysts and scientists. You will be joining one of 3 teams that develop new features and improve existing parts of Data Science Studio (DSS) based on user feedback.

DSS is an on-premises application that connects together all big data technologies. We work with SQL databases, Spark, Kubernetes, Hadoop, Elasticsearch, MLlib, scikit-learn, Shiny, … and many more. Basically, our technological stack is made of all the technologies present in Technoslavia!

Our backend is mainly written in Java but also includes large chunks in Scala, Python and R. Our frontend is based on Angular and also makes vast usage of d3.js.

One of the most unique characteristics of DSS is the breadth of its scope and the fact that it caters both to data analysts (with visual and easy to use analytics) and data scientists (with deep integration in code and libraries, and a web-based IDE).

This is a full-time position, based in France, either in our Paris office or remote.

Your missions

  • Turn ideas or rough specifications into full-fledged product features, including unit and end-to-end tests.
  • Tackle complex problems that range from performance and scalability to usability, so that complicated machinery looks straightforward and simple to use for our users.
  • Help your coworkers: review code, spread your technical expertise, improve our tool chain
  • Bring your energy to the team!

You are the ideal recruit if

  • You have mastered a programming language (Java, C#, Python, Javascript, you name it, ...).
  • You know that low-level Java code and slick web applications in Javascript are two sides of the same coin and are eager to use both.
  • You know that ACID is not a chemistry term.
  • You have initial experience (either professional or personal) building a real product or working with big data or cloud technologies.

Hiring process

  • Initial call with the talent acquisition manager
  • On-site meeting (or video call) with the hiring manager
  • Home test to show your skills
  • Final on-site interviews


To fulfill its mission, Dataiku is growing fast! In 2019, we achieved unicorn status, went from 200 to 400 people and opened new offices across the globe. We now serve our global customer base from our headquarters in New York City as well as offices in Paris, London, Munich, Amsterdam, Denver, Los Angeles, Singapore, Sydney and Dubaï. Each of them has a unique culture, but underpinning local nuances, we always value curiosity, collaboration, and can-do attitudes!
Machine Learning Engineer or Data Scientist
python machine-learning nlp artificial-intelligence machine learning scala Feb 22

Builders and Fixers Wanted!

Company Description:  

Ephesoft is the leader in Context Driven Productivity solutions, helping organizations maximize productivity and fuel their journey towards the autonomous enterprise through contextual content acquisition, process enrichment and amplifying the value of enterprise data. The Ephesoft Semantik Platform turns flat data into context-rich information to fuel data scientists, business users and customers with meaningful data to automate and amplify their business processes. Thousands of customers worldwide employ Ephesoft’s platform to accelerate nearly any process and drive high value from their content. Ephesoft is headquartered in Irvine, Calif., with regional offices throughout the US, EMEA and Asia Pacific. To learn more, visit ephesoft.com.

Ready to invent the future? Ephesoft is immediately hiring a talented, driven Machine Learning Engineer or Data Scientist to play a key role in developing a high-profile AI platform in use by organizations around the world. The ideal candidate will have experience in developing scalable machine learning products for different contexts such as object detection, information retrieval, image recognition, and/or natural language processing.

In this role you will:

  • Develop and deliver CV and NLP systems to bring structure and understanding to unstructured documents.
  • Innovate by designing novel solutions to emerging and extant problems within the domain of invoice processing.
  • Be part of a team of Data Scientists, Semantic Architects, and Software Developers responsible for developing AI, ML, and Cognitive Technologies while building a pipeline to continuously deliver new capabilities and value. 
  • Implement creative data-acquisition and labeling solutions that will form the foundations of new supervised ML models.
  • Communicate effectively with stakeholders to convey technical vision for the AI capabilities in our solutions. 

 You will bring to this role:

  • Love for solving problems and working in a small, agile environment.
  • Hunger for learning new skills and sharing your findings with others.
  • Solid understanding of good research principles and experimental design.
  • Passion for developing and improving CV/AI components, not just grabbing something off the shelf.
  • Excitement about developing state-of-the-art, ground-breaking technologies and owning them from imagination to production.

Qualifications:

  • 3+ years of experience developing and building AI/ML driven solutions
  • Development experience in at least one object-oriented programming language (Java, Scala, C++) with preference given to Python experience
  • Demonstrated skills with ML, CV and NLP libraries/frameworks such as NLTK, spaCy, Scikit-Learn, OpenCV, Scikit-Image
  • Strong experience with deep learning libraries/frameworks like TensorFlow, PyTorch, or Keras
  • Proven background of designing and training machine learning models to solve real-world business problems

EEO Statement:

Ephesoft embraces diversity and equal opportunity. We are committed to building a team that represents a variety of backgrounds, perspectives, and skills. We believe the more inclusive we are, the better our company will be.

Senior Data Engineer
apache machine-learning algorithm senior python scala Feb 19

SemanticBits is looking for a talented Senior Data Engineer who is eager to apply computer science, software engineering, databases, and distributed/parallel processing frameworks to prepare big data for use by data analysts and data scientists. You will mentor junior engineers and deliver data acquisition, transformation, cleansing, conversion, compression, and loading of data into data and analytics models. You will work in partnership with data scientists and analysts to understand use cases, data needs, and outcome objectives. You are a practitioner of advanced data modeling and optimization of data and analytics solutions at scale; an expert in data management, data access (big data, data marts, etc.), programming, and data modeling; and familiar with analytic algorithms and applications (like machine learning).

Requirements

  • Bachelor’s degree in computer science (or related) and eight years of professional experience
  • Strong knowledge of computer science fundamentals: object-oriented design and programming, data structures, algorithms, databases (SQL and relational design), networking
  • Demonstrable experience engineering scalable data processing pipelines.
  • Demonstrable expertise with Python, Spark, and wrangling of various data formats - Parquet, CSV, XML, JSON.
  • Experience with the following technologies is highly desirable: Redshift (w/Spectrum), Hadoop, Apache NiFi, Airflow, Apache Kafka, Apache Superset, Flask, Node.js, Express, AWS EMR, Scala, Tableau, Looker, Dremio
  • Experience with Agile methodology, using test-driven development.
  • Excellent command of written and spoken English
  • Self-driven problem solver
Senior Data Engineer
Acast  
senior java scala big data docker cloud Feb 10
Acast is the world-leading technology platform for on-demand audio and podcasting, with offices in Stockholm, London, New York, Los Angeles, Sydney, Paris, Oslo and Berlin. We have over 150M monthly listens today, and are growing rapidly. At our core is a love of audio and the fascinating stories our podcasters tell.

We are a flat organization that supports a culture of autonomy and respect, and find those with an entrepreneurial spirit and curious mindset thrive at Acast. 

We are looking for a Senior Data Engineer to join a new purpose-driven team that will create data-driven products to help other teams provide smarter solutions to our end customers, as well as core datasets for business-critical use cases such as payouts to our podcasters. This team’s ambition is to transform our data into insights. The products you build will be used by our mobile apps, the product suite we have for podcast creators and advertisers, as well as by other departments within Acast.

In this role you will work with other engineers and product owners within a cross-functional agile team.

You

  • Have 3+ years of experience building robust big data ETL pipelines within the Hadoop ecosystem: Spark, Hive, Presto, etc
  • Are proficient in Java or Scala and Python
  • Have experience with the AWS cloud environment: EMR, Glue, Kinesis, Athena, DynamoDB, Lambda, Redshift, etc.
  • Have strong knowledge of SQL and NoSQL database design and modelling, and know the differences between modern big data systems and traditional data warehousing
  • Have DevOps and infrastructure-as-code experience (a plus), and are familiar with tools like Jenkins, Ansible, Docker, Kubernetes, CloudFormation, Terraform, etc
  • Advocate agile software development practices and balance trade-offs in time, scope and quality
  • Are curious and a fast learner who can adapt quickly and enjoy a dynamic and ever-changing environment

Benefits

  • Monthly wellness allowance
  • 30 days holiday
  • Flexible working
  • Pension scheme
  • Private medical insurance
Our engineering team is mostly located in central Stockholm, but with a remote-first culture we’re able to bring on people who prefer full-time remote work from Sweden, Norway, the UK, France and Germany.

Do you want to be part of our ongoing journey? Apply now!

Solutions Architect - Pacific Northwest
java python scala big data linux cloud Feb 07
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Sequoia Capital, Benchmark, and Index Ventures to recently invest a combined $125 million in our Series D financing. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!

We are looking for a Solutions Architect to join our Customer Success team. As a Solutions Architect (SA), you will help customers leverage streaming architectures and applications to achieve their business results. In this role, you will interact directly with customers to provide software architecture, design, and operations expertise that leverages your deep knowledge of and experience in Apache Kafka, the Confluent platform, and complementary systems such as Hadoop, Spark, Storm, relational and NoSQL databases. You will develop and advocate best practices, gather and validate critical product feedback, and help customers overcome their operational challenges.

Throughout all these interactions, you will build a strong relationship with your customer in a very short space of time, ensuring exemplary delivery standards. You will also have the opportunity to help customers build state-of-the-art streaming data infrastructure, in partnership with colleagues who are widely recognized as industry leaders, as well as optimizing and debugging customers' existing deployments.

Location:
You will be based in LOCATION, with 60-70% travel expected.
Anywhere in the Pacific Northwest.

Responsibilities

  • Helping a customer determine his/her platform and/or application strategy for moving to a more real-time, event-based business. Such engagements often involve remote preparation; presenting an onsite or remote workshop for the customer’s architects, developers, and operations teams; investigating (with Engineering and other coworkers) solutions to difficult challenges; and writing a recommendations summary doc.
  • Providing feedback to the Confluent Product and Engineering groups
  • Building tooling for another team or the wider company to help us push our technical boundaries and improve our ability to deliver consistently with high quality
  • Testing performance and functionality of new components developed by Engineering
  • Writing or editing documentation and knowledge base articles, including reference architecture materials and design patterns based on customer experiences
  • Honing your skills, building applications, or trying out new product features
  • Participating in community and industry events

Requirements

  • Deep experience designing, building, and operating in-production Big Data, stream processing, and/or enterprise data integration solutions, ideally using Apache Kafka
  • Demonstrated experience successfully managing multiple B2B infrastructure software development projects, including driving expansion, customer satisfaction, feature adoption, and retention
  • Experience operating Linux (configure, tune, and troubleshoot both RedHat and Debian-based distributions)
  • Experience using cloud providers (Amazon Web Services, Google Cloud, Microsoft Azure) for running high-throughput systems
  • Experience with Java Virtual Machine (JVM) tuning and troubleshooting
  • Experience with distributed systems (Kafka, Hadoop, Cassandra, etc.)
  • Proficiency in Java
  • Strong desire to tackle hard technical problems, and proven ability to do so with little or no direct daily supervision
  • Excellent communication skills, with an ability to clearly and concisely explain tricky issues and complex solutions
  • Ability to quickly learn new technologies
  • Ability and willingness to travel up to 50% of the time to meet with customers

Bonus Points

  • Experience helping customers build Apache Kafka solutions alongside Hadoop technologies, relational and NoSQL databases, message queues, and related products
  • Experience with Scala, Python, or Go
  • Experience working with a commercial team and demonstrated business acumen
  • Experience working in a fast-paced technology start-up
  • Experience managing projects, using any known methodology to scope, manage, and deliver on plan no matter the complexity
  • Bachelor-level degree in computer science, engineering, mathematics, or another quantitative field


Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.
Senior Software Engineer at Jack Henry & Associates, Inc.
scala fs2 http4s microservices distributed-system senior Feb 05

At Banno, we believe that the world is a better place when community banks and credit unions exist to serve their communities. Our mission is to build the technology that gives community financial institutions the tools they need to compete against the big banks. Banno is redefining the relationship between forward-thinking financial institutions and their customers.


About You

You are infinitely curious and thrive in an environment where you are constantly learning and growing. You want to be somewhere that you are trusted and set up for success. You want to be surrounded by other great engineers who drive you to be better every day.

Although you work in a team, you are self-motivated and able to work independently. You want to own the deliverable from start to finish by working with the product manager, defining the scope and seeing the work all the way through to deployment in production. You care deeply about your work, your team, and the end user.

Banno values trust and those with a bias towards action.  We are confident you will love it here.


What you and your team are working on

As a Senior Scala Engineer, you work with your team to provide APIs and back-end services for a suite of digital banking products, including native mobile and web applications. Our APIs are first-class citizens and are consumed both by our internal teams and by teams outside of Banno.

You keep our services up to date with the newest development and deployment practices. You are responsible for maintaining our services in a microservices environment and for implementing the tools necessary for observability and monitoring of those services.

This position can be worked 100% REMOTE from any US location.


Minimum Qualifications

  • Minimum 6 years of experience with server-side programming languages in production.

Preferred Qualifications

  • Knowledge of or experience with microservice architecture.
  • Experience with functional programming languages. 
  • Experience with the Scala libraries cats, http4s, and doobie (a small sketch in this style follows this list).
  • Experience with event driven architecture using Kafka.
  • Experience with Observability and Monitoring.
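
For candidates who haven't used that stack, the sketch below shows the flavor of cats-effect and http4s. It assumes a recent http4s (0.23+) on cats-effect 3; the route and port are made up, and this is illustrative only, not Banno's code.

    import cats.effect.{IO, IOApp}
    import com.comcast.ip4s._
    import org.http4s.HttpRoutes
    import org.http4s.dsl.io._
    import org.http4s.ember.server.EmberServerBuilder
    import org.http4s.implicits._

    // Minimal http4s service: a single GET route served by Ember.
    object HelloServer extends IOApp.Simple {
      private val routes = HttpRoutes.of[IO] {
        case GET -> Root / "hello" / name => Ok(s"Hello, $name")
      }

      def run: IO[Unit] =
        EmberServerBuilder.default[IO]
          .withHost(ipv4"0.0.0.0")
          .withPort(port"8080")
          .withHttpApp(routes.orNotFound)
          .build
          .useForever
    }
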
Data Science Engineer
data science java python scala big data cloud Feb 05
Contrast Security is the world’s leading provider of security technology that enables software applications to protect themselves against cyber attacks. Contrast's patented deep security instrumentation is the breakthrough technology that enables highly accurate analysis and always-on protection of an entire application portfolio, without disruptive scanning or expensive security experts. Only Contrast has intelligent agents that work actively inside applications to prevent data breaches, defeat hackers and secure the entire enterprise from development, to operations, to production.

Our Application Security Research (Contrast Labs) team is hyper-focused on continuous vulnerability and threat research affecting the world's software ecosystem. As a Data Science Engineer on the Research team, you will be responsible for expanding and optimizing data from our real-time security intelligence platform, as well as optimizing data flow and collection for cross-functional teams.

The Data Science Engineer will support our research team, software developers, database architects, marketing associates, product team, and other areas of the company on data initiatives, and will ensure optimal data delivery architecture is consistent throughout ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems and products. The right candidate will be excited by the prospect of optimizing or even re-designing our company's data architecture to support our next generation of products and data initiatives. The role also presents an opportunity to contribute original research as a data scientist through data correlation.

The Data Science Engineer is responsible for supporting and contributing to Contrast’s growing original security research efforts relevant to the development communities associated with the Contrast Assess, Protect, and OSS platforms. Original research will be published in company blogs, papers and presentations.

If you're amazing but missing some of these, email us your résumé and cover letter anyway. Please include a link to your Github or BitBucket account, as well as any links to some of your projects if available.

Responsibilities

  • Conduct basic and applied research on important and challenging problems in data science as it relates to the problems Contrast is trying to solve.
  • Assemble large, complex data sets that meet functional / non-functional business requirements. 
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and big data technologies (a brief sketch of such a pipeline follows this list).
  • Build analytics tools that utilize the data pipeline to provide actionable insights into threats, vulnerabilities, customer usage, operational efficiency and other key business performance metrics.
  • Help define and drive data-driven research projects, either on your own or in collaboration with others on the team.
  • Engage with Contrast’s product teams and customers to promote and seek out new data science research initiatives.
  • Create data tools for analytics and research team members that assist them in building and optimizing our product into an innovative industry leader.
  • Advanced working knowledge of Structured Query Language (SQL) and experience with relational databases and query authoring, as well as working familiarity with a variety of databases.
  • Development and presentation of content associated with the research through conference speaking and/or blogging.
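
As a concrete, hypothetical illustration of the pipeline work described above (not Contrast's actual schema or infrastructure), here is a small Spark Structured Streaming job in Scala that reads security events from Kafka and maintains windowed counts per category.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._
    import org.apache.spark.sql.types._

    // Hypothetical streaming ETL: consume JSON events from a Kafka topic,
    // parse them, and keep 5-minute windowed counts per event category.
    object ThreatCounts {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("threat-counts").getOrCreate()

        val schema = new StructType()
          .add("category", StringType)
          .add("ts", TimestampType)

        val events = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "security-events")
          .load()

        val parsed = events
          .select(from_json(col("value").cast("string"), schema).as("e"))
          .select("e.*")

        parsed
          .withWatermark("ts", "10 minutes")
          .groupBy(window(col("ts"), "5 minutes"), col("category"))
          .count()
          .writeStream
          .outputMode("update")
          .format("console")
          .start()
          .awaitTermination()
      }
    }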

About You

  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Strong analytic skills related to working with unstructured datasets. 
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • They should also have experience using some of the following software/tools:
  • Big data tools: Hadoop, Spark, Kafka, etc.
  • Relational SQL and NoSQL databases, including MongoDB and MySQL.
  • Data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
  • AWS cloud services: EC2, EMR, RDS, Redshift
  • Stream-processing systems: Storm, Spark-Streaming, etc.
  • Object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
  • 5+ years of experience in a Data Science role
  • Strong project management and organizational skills.
  • Nice to have: an understanding of the OWASP Top 10 and SANS/CWE Top 25.
  • You ask questions, let others know when you need help, and tell others what you need.
  • A minimum of a graduate degree in Computer Science, Statistics, Informatics, Information Systems or another quantitative field.

What We Offer

  • Competitive compensation
  • Daily team lunches (in office)
  • Meaningful stock options
  • Medical, dental, and vision benefits
  • Flexible paid time off 
By submitting your application, you are providing Personally Identifiable Information about yourself (cover letter, resume, references, or other employment-related information) and hereby give your consent for Contrast Security, and/ or our HR-related Service Providers, to use this information for the purpose of processing, evaluating and responding to your application for current and future career opportunities. Contrast Security is an equal opportunity employer and our team is comprised of individuals from many diverse backgrounds, lifestyles and locations. 

The California Consumer Privacy Act of 2018 (“CCPA”) will go into effect on January 1, 2020. Under CCPA, businesses must be overtly transparent about the personal information they collect, use, and store on California residents. CCPA also gives employees, applicants, independent contractors, emergency contacts and dependents (“CA Employee”) new rights to privacy.

In connection with your role here at Contrast, we collect information that identifies, reasonably relates to, or describes you (“Personal Information”). The categories of Personal Information that we collect, use or store include your name, government-issued identification number(s), email address, mailing address, emergency contact information, employment history, educational history, criminal record, demographic information, and other electronic network activity information by way of mobile device management on your Contrast-issued equipment. We collect and use those categories of Personal Information (the majority of which is provided by you) about you for human resources and other business-driven purposes, including evaluating your performance here at Contrast, evaluating you as a candidate for promotion within Contrast, managing compensation (including payroll and benefits), record keeping in relation to recruiting and hiring, conducting background checks as permitted by law, and ensuring compliance with applicable legal requirements for Contrast. We collect, use and store the minimal amount of information possible.

We also collect Personal Information in connection with your application for benefits. In addition to the above, Personal Information also identifies those on behalf of whom you apply for benefits. During your application for benefits, the categories of Personal Information that we collect include name, government-issued identification number(s), email address, mailing address, emergency contact information, and demographic information. We collect and use those categories of Personal Information for administering the benefits for which you are applying and ensuring compliance with applicable legal requirements and Contrast policies.
As a California resident, you are entitled to certain rights under CCPA:

-You have the right to know what personal information we have collected from you as a California employee;
-You have the right to know what personal information is sold or disclosed and to whom. That said, we do not sell your information. We do, however, disclose information to third parties in connection with the management of payroll, employee benefits, etc., to fulfill our obligations to you as an employee of Contrast. Each of those third parties has been served with a Notice to Comply with CCPA or has entered into a CCPA Addendum with Contrast that precludes them from selling your information;
-You have the right to opt out of the sale of your personal information. Again, we do not sell it, but as a "consumer" in California you may want to be aware of that right with respect to other businesses; and
-You have the right to be free from retaliation for exercising any of these rights.

If you have any questions, please let us know!
Senior Data Engineer
Medium  
senior java python scala aws frontend Jan 29
At Medium, words matter. We are building the best place for reading and writing on the internet—a place where today’s smartest writers, thinkers, experts, and storytellers can share big, interesting ideas; a place where ideas are judged on the value they provide to readers, not the fleeting attention they can attract for advertisers.

We are looking for a Senior Data Engineer that will help build, maintain, and scale our business critical Data Platform. In this role, you will help define a long-term vision for the Data Platform architecture and implement new technologies to help us scale our platform over time. You'll also lead development of both transactional and data warehouse designs, mentoring our team of cross functional engineers and Data Scientists.

At Medium, we are proud of our product, our team, and our culture. Medium’s website and mobile apps are accessed by millions of users every day. Our mission is to move thinking forward by providing a place where individuals, along with publishers, can share stories and their perspectives. Behind this beautifully crafted platform is our engineering team, who work seamlessly together. From frontend to API, from data collection to product science, Medium engineers work multi-functionally with open communication and feedback.

What Will You Do!

  • Work on high impact projects that improve data availability and quality, and provide reliable access to data for the rest of the business.
  • Drive the evolution of Medium's data platform to support near real-time data processing and new event sources, and to scale with our fast-growing business.
  • Help define the team strategy and technical direction, advocate for best practices, investigate new technologies, and mentor other engineers.
  • Design, architect, and support new and existing ETL pipelines, and recommend improvements and modifications.
  • Be responsible for ingesting data into our data warehouse and providing frameworks and services for operating on that data including the use of Spark.
  • Analyze, debug and maintain critical data pipelines.
  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Spark, and AWS technologies (a sketch of such a job follows this list).
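
For a flavor of what such an ETL job can look like, here is a minimal sketch of a Spark batch job in Scala. The bucket paths, schema, and column names are illustrative placeholders, not Medium's actual pipeline.

    // Minimal sketch of a batch ETL step: read raw JSON events from S3,
    // normalize them with Spark SQL, and load partitioned Parquet for the
    // warehouse. All paths and columns are hypothetical.
    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object DailyEventEtl {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("daily-event-etl").getOrCreate()

        // Extract: raw events landed on S3 by an upstream collector.
        val raw = spark.read.json("s3://example-bucket/events/dt=2020-01-01/")

        // Transform: keep well-formed events and derive warehouse columns.
        val cleaned = raw
          .filter(col("userId").isNotNull && col("eventType").isNotNull)
          .withColumn("eventDate", to_date(col("timestamp")))
          .select("userId", "eventType", "eventDate")

        // Load: partitioned Parquet that downstream jobs and analysts can query.
        cleaned.write
          .mode("overwrite")
          .partitionBy("eventDate")
          .parquet("s3://example-bucket/warehouse/events/")

        spark.stop()
      }
    }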

Who You Are!

  • You have 7+ years of software engineering experience.
  • You have 3+ years of experience writing and optimizing complex SQL and ETL processes, preferably in connection with Hadoop or Spark.
  • You have outstanding coding and design skills, particularly in Java/Scala and Python.
  • You have helped define the architecture, tooling, and strategy for a large-scale data processing system.
  • You have hands-on experience with AWS services like EC2, SQS, SNS, RDS, ElastiCache, etc., or equivalent technologies.
  • You have a BS in Computer Science / Software Engineering or equivalent experience.
  • You have knowledge of Apache Spark, Spark streaming, Kafka, Scala, Python, and similar technology stacks.
  • You have a strong understanding and working knowledge of algorithms and data structures.

Nice To Have!

  • Snowflake knowledge and experience
  • Looker knowledge and experience
  • Dimensional modeling skills
At Medium, we foster an inclusive, supportive, fun yet challenging team environment. We value having a team that is made up of a diverse set of backgrounds and respect the healthy expression of diverse opinions. We embrace experimentation and the examination of all kinds of ideas through reasoning and testing. Come join us as we continue to change the world of digital media. Medium is an equal opportunity employer.

Interested? We'd love to hear from you.
Consulting Engineer
java python scala big data linux azure Jan 17
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Sequoia Capital, Benchmark, and Index Ventures to recently invest a combined $125 million in our Series D financing. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!

Consulting Engineers drive customer success by helping them realize business value from the burgeoning flow of real-time data streams in their organizations. In this role you’ll interact directly with our customers to provide software, development and operations expertise, leveraging deep knowledge of best practices in the use of Apache Kafka, the broader Confluent Platform, and complementary systems like Hadoop, Spark, Storm, relational databases, and various NoSQL databases.  
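
To make the day-to-day concrete, most engagements revolve around building blocks like the following: a minimal Kafka producer written in Scala against the plain Java client. The broker address, topic, and record contents are placeholders, not anything from a real engagement.

    import java.util.Properties
    import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}
    import org.apache.kafka.common.serialization.StringSerializer

    object HelloProducer {
      def main(args: Array[String]): Unit = {
        val props = new Properties()
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, classOf[StringSerializer].getName)

        val producer = new KafkaProducer[String, String](props)
        try {
          // Each record is one event in the stream; the key determines the partition.
          producer.send(new ProducerRecord("page-views", "user-42", "viewed /pricing"))
          producer.flush()
        } finally producer.close()
      }
    }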

Throughout all of these interactions, you’ll build strong relationships with customers, ensure exemplary delivery standards, and have a lot of fun building state-of-the-art streaming data infrastructure alongside colleagues who are widely recognized as leaders in this space.

Promoting Confluent and our amazing team to the community and wider public audience is something we invite all our employees to take part in. This can take the form of writing blog posts, speaking at meetups and well-known industry events about use cases and best practices, or be as simple as releasing code.

While Confluent is headquartered in Palo Alto, you can work remotely from any location on the East Coast of the United States as long as you are able to travel to client engagements as needed.

A typical week at Confluent in this role may involve:

  • Preparing for an upcoming engagement, discussing the goals and expectations with the customer and preparing an agenda
  • Researching best practices or components required for the engagement
  • Delivering an engagement on-site, working with the customer’s architects and developers in a workshop environment
  • Producing and delivering the post-engagement report to the customer
  • Developing applications on the Confluent Platform
  • Deploying, augmenting, and upgrading Kafka clusters
  • Building tooling for another team and the wider company
  • Testing performance and functionality of new components developed by Engineering
  • Writing or editing documentation and knowledge base articles
  • Honing your skills, building applications, or trying out new product features

Required Skills:

  • Deep experience building and operating in-production Big Data, stream processing, and/or enterprise data integration solutions using Apache Kafka
  • Experience operating Linux (configuring, tuning, and troubleshooting both RedHat- and Debian-based distributions)
  • Experience with Java Virtual Machine (JVM) tuning and troubleshooting
  • Experience with distributed systems (Kafka, Hadoop, Cassandra, etc.)
  • Proficiency in Java
  • Excellent communication skills, with an ability to clearly and concisely explain tricky issues and complex solutions
  • Ability and willingness to travel, at times 50-75% of the time, to meet with customers at client engagements
  • Bachelor-level degree in computer science, engineering, mathematics, or another quantitative field

Nice to have:

  • Experience using Amazon Web Services, Azure, and/or GCP for running high-throughput systems
  • Experience helping customers build Apache Kafka solutions alongside Hadoop technologies, relational and NoSQL databases, message queues, and related products
  • Experience with Python, Scala, or Go
  • Experience with configuration and management tools such as Ansible, Terraform, Puppet, Chef
  • Experience writing to network-based APIs (preferably REST/JSON or XML/SOAP)
  • Knowledge of enterprise security practices and solutions, such as LDAP and/or Kerberos
  • Experience working with a commercial team and demonstrated business acumen
  • Experience working in a fast-paced technology start-up
  • Experience managing projects, using any known methodology to scope, manage, and deliver on plan no matter the complexity
Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.
Senior Data Scientist
r machine-learning python apache-spark cluster-analysis senior Jan 08

In the Senior Data Scientist role, you will have full ownership over the projects you tackle, contribute to solving a wide range of machine learning applications, and find opportunities where data can improve our platform and company. We are looking for an experienced and creative self-starter who executes well and can exhibit exceptional technical know-how and strong business sense to join our team. 


WHAT YOU'LL DO:

  • Mine and analyze data from company data stores to drive optimization and improvement of product development, marketing techniques and business strategies
  • Assess the effectiveness and accuracy of data sources and data gathering techniques
  • Develop and implement data cleansing and processing to evaluate and optimize data quality
  • Develop custom data models and algorithms to apply to data sets
  • Run complex SQL queries and existing automations to correlate disparate data to identify questions and pull critical information
  • Apply statistical analysis and machine learning to uncover new insights and predictive models for our clients
  • Develop the company A/B testing framework and test model quality (a toy example of the underlying statistics follows this list)
  • Collaborate with data engineering and ETL teams to deploy models / algorithms in production environment for operations use
  • Develop processes and tools to monitor and analyze model performance and data accuracy
  • Perform ad-hoc analyses and present results in a clear manner
  • Create visualizations and tell clear stories with the data
  • Communicate statistical analyses and machine learning models to executives and clients
  • Create and manage APIs
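
As a toy illustration of the statistics behind an A/B testing framework (mentioned above), the following Scala snippet runs a pooled two-proportion z-test on made-up conversion counts. It is a sketch of the underlying math, not this company's framework.

    object AbTest {
      // z-score for the difference between two conversion rates,
      // using the pooled standard error.
      def zScore(convA: Long, nA: Long, convB: Long, nB: Long): Double = {
        val pA = convA.toDouble / nA
        val pB = convB.toDouble / nB
        val pooled = (convA + convB).toDouble / (nA + nB)
        val se = math.sqrt(pooled * (1 - pooled) * (1.0 / nA + 1.0 / nB))
        (pB - pA) / se
      }

      def main(args: Array[String]): Unit = {
        // Hypothetical experiment: 4.8% vs 5.6% conversion on 10k users each.
        val z = zScore(convA = 480, nA = 10000, convB = 560, nB = 10000)
        // |z| > 1.96 corresponds to p < 0.05 for a two-sided test.
        println(f"z = $z%.2f, significant at 5%%: ${math.abs(z) > 1.96}")
      }
    }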

WHO YOU ARE:

  • 3-5+ years of relevant work experience
  • Extensive knowledge of Python and R
  • Clear understanding of various analytical functions (median, rank, etc.) and how to use them on data sets
  • Expertise in mathematics, statistics, correlation, data mining and predictive analysis
  • Experience with deep statistical insights and machine learning (Bayesian, clustering, etc.)
  • Familiarity with AWS Cloud Computing including: EC2, S3, EMR.
  • Familiarity with Geospatial Analysis/GIS
  • Other experience with programming languages such as Java, Scala and/or C#
  • Proficiency using query languages such as SQL, Hive, and Presto
  • Familiarity with big data ecosystems (Spark/PySpark, MapReduce, or Hadoop)
  • Familiarity with software development tools and platforms (Git, Linux, etc.)
  • Proven ability to drive business results with data-based insights
  • Self-initiative and an entrepreneurial mindset
  • Strong communication skills
  • Passion for data

WHAT WE OFFER:

  • Competitive Salary
  • Medical, Dental and Vision
  • 15 Days of PTO (Paid Time Off)
  • Lunch provided 2x a week 
  • Snacks, snacks, snacks!
  • Casual dress code
Senior Software Engineer, Data Pipeline
java scala go elasticsearch apache-spark senior Dec 31 2019

About the Opportunity

The SecurityScorecard ratings platform helps enterprises across the globe manage the cyber security posture of their vendors. Our SaaS products have created a new category of enterprise software and our culture has helped us be recognized as one of the 10 hottest SaaS startups in NY for two years in a row. Our investors include both Sequoia and Google Ventures. We are scaling quickly but are ever mindful of our people and products as we grow.

As a Senior Software Engineer on the Data Pipeline Platform team, you will help us scale, support, and build the next-generation platform for our data pipelines. The team’s mission is to empower data scientists, software engineers, data engineers, and threat intelligence engineers to accelerate the ingestion of new data sources and present the data in a meaningful way to our clients.

What you will do:

  • Design and implement systems for ingesting, transforming, connecting, storing, and delivering data from a wide range of sources, with various levels of complexity and scale (one possible shape is sketched below).
  • Enable other engineers to deliver value rapidly with minimal duplication of effort.
  • Automate the infrastructure supporting the data pipeline as code, and improve CI/CD pipelines for deployments.
  • Monitor, troubleshoot, and improve the data platform to maintain stability and optimal performance.
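
One plausible shape for such an ingestion system, sketched in Scala with Spark Structured Streaming; the broker, topic, and S3 paths are illustrative assumptions, not SecurityScorecard's actual stack.

    import org.apache.spark.sql.SparkSession

    object PipelineIngest {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("pipeline-ingest").getOrCreate()

        // Ingest: subscribe to a stream of raw events from Kafka.
        val events = spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "raw-events")
          .load()
          .selectExpr("CAST(value AS STRING) AS json", "timestamp")

        // Deliver: continuously write the stream to storage, with checkpointing
        // so the job can recover after failures.
        events.writeStream
          .format("parquet")
          .option("path", "s3://example-bucket/ingested/")
          .option("checkpointLocation", "s3://example-bucket/checkpoints/ingest/")
          .start()
          .awaitTermination()
      }
    }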

Who you are:

  • Bachelor's degree or higher in a quantitative/technical field such as Computer Science, Engineering, Math
  • 6+ years of software development experience
  • Exceptional skills in at least one high-level programming language (Java, Scala, Go, Python or equivalent)
  • Strong understanding of big data technologies such as Kafka, Spark, Storm, Cassandra, Elasticsearch
  • Experience with AWS services including S3, Redshift, EMR and RDS
  • Excellent communication skills to collaborate with cross functional partners and independently drive projects and decisions

What to Expect in Our Hiring Process:

  • Phone conversation with Talent Acquisition to learn more about your experience and career objectives
  • Technical phone interview with hiring manager
  • Video or in person interviews with 1-3 engineers
  • At home technical assessment
  • Video or in person interview with engineering leadership
Software Engineering Manager
scala functional-programming http4s fs2 scala-cats manager Dec 30 2019

As an Engineering Manager on a services team for the Banno Platform at Jack Henry, you’ll get the chance to make a positive impact on people’s lives. We believe that the world is a better place with community banks and credit unions. Our mission is to build the technology that gives community banks and credit unions the tools they need to compete against the big banks.

Service teams create highly scalable public APIs used by millions of customers to normalize access to multiple banking systems for use in our mobile and online banking clients. You’ll work on a team that deploys and monitors its own services. Our platform is primarily functional Scala, with a few services written in Haskell, Node.js, and Rust.
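
For readers unfamiliar with that stack, here is a minimal sketch of a service in this functional style, using http4s and cats-effect (http4s and the cats ecosystem are mentioned under Preferred Qualifications below). The route and port are illustrative, not part of the Banno platform.

    import cats.effect.{ExitCode, IO, IOApp}
    import org.http4s.HttpRoutes
    import org.http4s.dsl.io._
    import org.http4s.implicits._
    import org.http4s.server.blaze.BlazeServerBuilder

    object AccountsService extends IOApp {
      // One pure route: GET /accounts/{id} returns a small JSON body.
      val routes: HttpRoutes[IO] = HttpRoutes.of[IO] {
        case GET -> Root / "accounts" / id =>
          Ok(s"""{"accountId": "$id"}""")
      }

      def run(args: List[String]): IO[ExitCode] =
        BlazeServerBuilder[IO]
          .bindHttp(8080, "0.0.0.0")
          .withHttpApp(routes.orNotFound)
          .serve.compile.drain
          .as(ExitCode.Success)
    }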

Ideal candidates are self-motivated, technically competent servant leaders with experience building, mentoring and growing their team. The first six months will be spent as an individual contributor engineer on the team, learning the domain and building trust with team members.

We are committed to creativity, thoughtfulness, and openness. Our team is highly distributed, meaning you will work with kind, talented engineers from across the United States. Occasional travel may be required for professional development conferences or company meetings.

This is a remote position, with the ability to co-locate at several JHA locations nationwide if desired.

Minimum Qualifications

  • Minimum 7 years of experience with server-side programming languages.
  • Minimum 1 year of team lead, supervisory or management experience.
  • Minimum 1 year developing, maintaining, and supporting a public-facing API in production.
  • Knowledge of or experience with microservice architecture in a production environment.

Preferred Qualifications

  • Experience with Scala or Haskell in a production environment.
  • Understanding of the functional programming paradigm.
  • Experience with the cats, fs2, http4s, and doobie libraries.
  • Experience with tools like Kafka, Kinesis, AWS Lambda, Azure Functions.
  • Experience with Kubernetes.

Essential Functions

  • Oversees the daily operation of one or more engineering teams.
  • Assists team in the development and implementation of policies, procedures and programs.
  • Mentors, coaches and assists in the career development of team members and participates in frequent one-on-ones.
  • Completes product technical design and prototyping, software development, bug verification and resolution.
  • Performs system analysis and programming activities which may require research.
  • Provides technical/engineering support for new and existing applications from code delivery until the retirement of the application.
  • Provides reasonable task and project effort estimates.
  • Ensures timely, effective, and quality delivery of software into production.
  • Develops and tests applications based on business requirements and industry best practices.
  • Creates required technical documentation.
  • Periodically troubleshoots system failures during off hours.
  • Participates in an on-call rotation supporting team owned services.
  • Collaboratively works across teams to ensure timely delivery of high-quality products.
  • Collaboratively works with customer support team to resolve or diagnose defects.
Senior Machine Learning - Series A Funded Startup
machine-learning scala python tensorflow apache-spark machine learning Dec 26 2019
About you:
  • Care deeply about democratizing access to data.  
  • Passionate about big data and are excited by seemingly-impossible challenges.
  • At least 80% of people who have worked with you put you in the top 10% of the people they have worked with.
  • You think life is too short to work with B-players.
  • You are entrepreneurial and want to work in a super fast-paced environment where the solutions aren’t already predefined.
About SafeGraph: 

  • SafeGraph is a B2B data company that sells to data scientists and machine learning engineers. 
  • SafeGraph's goal is to be the place for all information about physical Places.
  • SafeGraph currently has 20+ people and has raised a $20 million Series A.  CEO previously was founder and CEO of LiveRamp (NYSE:RAMP).
  • Company is growing fast, over $10M ARR, and is currently profitable. 
  • Company is based in San Francisco but about 50% of the team is remote (all in the U.S.). We get the entire company together in the same place every month.

About the role:
  • Core software engineer.
  • Reporting to SafeGraph's CTO.
  • Work as an individual contributor.  
  • Opportunities for future leadership.

Requirements:
  • You have at least 6 years of relevant work experience.
  • Deep understanding of machine learning models, data analysis, and both supervised and unsupervised learning methods. 
  • Proficiency writing production-quality code, preferably in Scala, Java, or Python.
  • Experience working with huge data sets. 
  • You are authorized to work in the U.S.
  • Excellent communication skills.
  • You are amazingly entrepreneurial.
  • You want to help build a massive company. 
Nice to haves:
  • Experience using Apache Spark to solve production-scale problems.
  • Experience with AWS.
  • Experience with building ML models from the ground up.
  • Familiarity with Python, database and systems design, Scala, TensorFlow, Apache Spark, and Hadoop MapReduce.
Data Engineer
python pyspark sql aws scala Dec 25 2019
  • Solid programming background in Python
  • Experience extracting and loading data into relational databases and optimizing SQL queries
  • Familiarity with the Hadoop ecosystem, mainly HDFS, Hive, and Spark (we use PySpark, but Scala would also be considered)
  • Experience with these AWS services: Glue, Athena, Lambda, EMR
  • Knowledge of orchestration tools such as Airflow, Oozie, and AWS Step Functions
  • Proficiency in English and Spanish

Nice to have:

  • Experience with Kafka and Kinesis