Remote Kubernetes Jobs

Yesterday

Software Engineer for DevOps/SRE
Infiot  
google-cloud-platform kubernetes python azure devops cloud Feb 26

At Infiot we run a state-of-the-art cloud-native infrastructure that supports our edge computing solution. We are looking for a Software Engineer passionate about automation who will be our first SRE hire. The qualified candidate will work with an experienced development team to address our needs in the DevOps/SRE area and help shape our culture.

If you want more details on what this role is about, why we are looking for a software engineer and why this is not an "Ops job" please read the Google SRE Books.


This Month

.NET Engineer
asp.net-core azure-devops microservices .net-core kubernetes sysadmin Feb 19

Brushfire is looking for a .NET Engineer (U.S. Only) who is

  • well-versed in large-scale application infrastructure and design,
  • familiar with web application development patterns and practices,
  • driven to create captivating and interactive web experiences,
  • skilled in layout and has an eye for detail,
  • experienced in building published, high quality web sites and applications,
  • stimulated by collaborating with a team to define, design and ship new features,
  • excited to work for our primarily Christian church/ministry customers.

Your primary task will be to utilize your knowledge/problem-solving skills to work alongside and with our existing developers as we create, maintain, and enhance our large-scale web applications. You should be on the cutting edge of emerging standards, technologies, and tools while being well-versed in cross-platform, multi-cloud development of complex, highly-available systems. Ultimately, we want someone who will take pride in learning quickly and contributing fully to create an experience our users will love. If that sounds like you, then you sound like us!

We value your time and efforts, so compensation is commensurate with experience and includes benefits. We thrive in a completely remote work environment -- with no central office -- where people learn and grow with the company. We are collaborative, creative, and innovative, with each person expected to contribute to meaningful outcomes.

Successful applicants will be asked to show proof that they can legally work in the US. Though we are remote, applicants closer to our teams in Fort Worth, Texas will be shown preference.

Things you'll be doing:

  • Build and maintain multi-tiered systems and microservices using C#, ASP.NET, JavaScript, HTML, T-SQL, Docker, and Kubernetes
  • Design Serverless Functions and Web Sites
  • Design and consume HTTP REST APIs
  • Build and maintain devops pipelines for CI/CD
  • Code web applications using ASP.NET MVC based upon approved designs
  • Collaborate closely with other developers and graphic designers
  • Participate on multiple projects from concept through completion without continual supervision
  • Provide quality control over both code and visual concepts/designs
  • Potentially lead sessions in your areas of strength while supervising and inspiring those involved in your process

Things you really need to have:

  • Demonstrable experience coding complex web applications in ASP.NET
  • Demonstrable experience with Continuous Integration tools like Jenkins and Azure DevOps
  • Demonstrable experience with Kubernetes and Docker
  • Strong object-oriented programming skills and familiarity with software design patterns
  • Strong knowledge of SQL/relational databases
  • Familiarity with distributed version control systems
  • Ability to communicate fluently, pleasantly, and effectively—both orally and in writing, in the English language—with customers and co-workers.
  • Passion, integrity, and energy

Things we think are cool for you to have, but aren't deal breakers:

  • Bachelor's degree in Computer Science or related field
  • Experience with wireframing/mockup tools (InVision/Sketch)
  • Experience with React, Angular, or equivalent frameworks
  • Familiarity with non-structured persistent document data stores (NoSQL)
  • Demonstrable experience on past projects (via GitHub, Bitbucket, Google Code, etc.). A candidate with an active commit history at a site like these will be favored over a candidate without similar history.
DevOps Engineer
Plutora  
aws devops amazon-cloudformation docker kubernetes python Feb 13

About the Opportunity

Do what you love and make an impact. Become a Plutorian!

We are in the business of helping companies become software juggernauts. 
Over the last 7 years we have built an enterprise SaaS platform used by many of the world’s largest companies (Vodafone, Telefonica, Westpac, Pepsico, Barclays... just to name a few).

We are fully cloud based and are about to embark on our next phase of cloud adoption (automation and scale).  We are looking for a DevOps ninja to join our team and work with our squads to deliver improved cloud-based architecture and new capabilities leveraging the AWS platform.

To support business growth, you will join Plutora’s well-established technology team working on existing platforms and implementing new technologies and features across our cloud infrastructure.  We run cross functional squads consisting of product, engineering, QA and DevOps. You will play a crucial role to bring your knowledge and experiences to help us achieve greatness.

Key Responsibilities:

  • Design, configure and maintain our cloud infrastructure to support our distributed systems architecture at scale 
  • Work with architects and engineers to design and build the next generation of our cloud infrastructure
  • Help to build the perfect CI/CD pipeline allowing engineers to push their changes to production with high confidence 
  • You will help build monitoring solutions that keep our systems’ health in check and ensure our customers come first
  • When needed, work with our Global customers to ensure our platform is integrated and working as required.
  • Help troubleshoot and optimize our backend services
  • Implement security best practices and industry standards 
  • Make a positive and proactive contribution within a multi-disciplined team while keeping up to date with the latest tools, techniques and best practices.

About You

We are looking for passionate, enthusiastic and proactive people that want to build their future with us.

You will bring a culture of automation first with what you do as well as a collaborative approach to working with engineering teams. Working closely with each of the product squads you will ensure that our CI tools, processes and methods are enabling us to deliver world-class cloud services and will ensure a smooth transition from development to production environments.

This position is for a passionate DevOps engineer with significant hands-on DevOps experience implementing and managing automated infrastructure and code deployments.


Skills/Knowledge/Experience/Qualifications:

  • Ability to build, configure, support and develop AWS cloud platforms and associated systems
  • Certifications across AWS and Microsoft (essential), plus DevOps tooling such as Puppet, Octopus, and Jenkins
  • Technically proficient in AWS services (EC2, ELB, RDS, EBS, Lambda, S3 etc)
  • Programming languages and scripting - PowerShell, Bash, Python, Perl, etc.
  • Highly experienced with infrastructure and automation tooling such as Jenkins, Puppet, Terraform, Chef, Ansible, etc.
  • Containerisation tooling - Kubernetes, Docker
  • Proven ability to provision and configure AWS cloud based resources to manage complex application environments
  • Strong skills with source code control (Git), and CI/CD tools
  • Experience troubleshooting common web server, caching and database services
  • Security exposure using WAF technologies such as Cloudflare
  • Networking experience is essential (firewalls, routers, load balancers, routing)
  • Experience implementing and managing orchestration tools and developing automation systems
  • Experience developing and maintaining highly available, resilient services
  • Good communication skills and ability to work with key stakeholders to produce a desired outcome.
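
The "automation first" culture described above usually hinges on idempotency: a provisioning step must be safe to re-run and should report whether it changed anything. As a minimal language-agnostic sketch (plain Python with an invented `ensure_line` helper, not the API of any real tool; Puppet, Chef, and Ansible implement the same converge-and-report pattern at much larger scale):

```python
import tempfile
from pathlib import Path

def ensure_line(path: Path, line: str) -> bool:
    """Idempotently ensure `line` is present in the file at `path`.

    Returns True if a change was made, False if the desired state
    already held -- the converge-and-report pattern used by
    configuration management tools.
    """
    existing = path.read_text().splitlines() if path.exists() else []
    if line in existing:
        return False  # already converged: nothing to do
    path.write_text("\n".join(existing + [line]) + "\n")
    return True

# Re-running converges to the same state and reports "no change".
cfg = Path(tempfile.gettempdir()) / "demo_sshd_config"
cfg.unlink(missing_ok=True)
print(ensure_line(cfg, "PermitRootLogin no"))  # True: line added
print(ensure_line(cfg, "PermitRootLogin no"))  # False: already present
```

The same shape scales up: Terraform plans, Ansible tasks, and Puppet resources are all "describe the desired state, converge, report the diff".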
Lead Software Engineer
python flask sqlalchemy rest kubernetes machine learning Feb 13

Carbon Relay is a world-class team focused on harnessing the power of machine learning to optimize Kubernetes. Our innovative platform allows organizations to boost application performance while keeping costs down. We recently completed a major fundraising round and are scaling up rapidly to turn our vision into reality. This position is perfect for someone who wants to get in on the ground floor at a startup that moves fast, tackles hard problems, and has fun!

We are looking for a Lead Software Engineer to spearhead the development of our backend applications. You will bridge the gap between the machine learning and Kubernetes teams to ensure that our products delight customers and scale efficiently.

Responsibilities

  • Developing our internal APIs and backend
  • Designing and implementing SaaS-based microservices
  • Collaborating with our infrastructure, machine learning and Kubernetes teams

Required qualifications

  • 10+ years of experience in software engineering
  • Proficiency in Python
  • Experience shipping and maintaining software products

Preferred qualifications

  • Experience with JavaScript
  • Experience with GCP/GKE
  • Familiarity with Kubernetes and containerization

Why join Carbon Relay

  • Competitive salary plus equity
  • Health, dental, vision and life insurance
  • Unlimited vacation policy (and we do really take vacations)
  • Ability to work remotely
  • Snacks, lunches and all the typical benefits you would expect from a well-funded, fun startup!
Snr Site Reliability Engineer - Quizlet
python go kubernetes docker aws testing Feb 12
  • Company: Quizlet.com
  • Technical recruiting partner: SourceCoders.io
  • Location: Onsite in San Francisco or Denver or Remote for CST or EST based candidates 
  • Compensation: $120K-$200K (heavily dependent on experience and work location)
  • Work visas accepted: US Citizen, Green Card, H-1B transfer, TN Visa

Quizlet’s mission is to help students (and their teachers) practice and master whatever they are learning. Every month more than 50 million active learners from 130 countries practice and master more than 300 million study sets on every conceivable topic and subject. We are developing new learning experiences by modeling how students learn and drawing upon knowledge acquisition, retention, and pedagogy in cognitive science. We are always seeking to help students master any subject by optimizing study efficiency and engagement.

Want to be a go-to person for site reliability on the most-used learning platform in the U.S.? Want to work on a service that is rapidly scaling and relied upon by millions of students and teachers worldwide? Quizlet is an indispensable utility used daily by millions of students and teachers around the globe. If our site goes down, even just for a few minutes, the pain is felt intensely. Speed is crucial, and downtime is not an option as we grow — during the school year, we are in the top 20 most-visited websites in the U.S. These are challenges you will face on day one at Quizlet.

What you'll do

    • Engage with service owners to improve the entire service lifecycle — from inception and design, through deployment, operation, maintenance, and sunset.
    • Help service owners drive their services through the service lifecycle via activities such as system design consulting, developing software platforms and frameworks, capacity planning and launch reviews.
    • Help service owners maintain their services once they are live by measuring and monitoring availability, latency, and overall system health.
    • Help scale systems sustainably through mechanisms like automation and evolve systems by pushing for changes that improve reliability and velocity.
    • Practice and evangelize sustainable incident response and blameless postmortems.
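
Measuring availability and latency, as in the bullets above, usually reduces to computing percentiles over request timings. A minimal illustrative sketch in plain Python (invented sample data; a real SRE setup would pull these numbers from a metrics system such as Prometheus):

```python
import statistics

def latency_percentiles(samples_ms):
    """Compute p50/p95/p99 latency from raw request timings (ms)."""
    # quantiles(n=100) returns the 99 cut points between percentiles 1..99
    cuts = statistics.quantiles(samples_ms, n=100)
    return {"p50": cuts[49], "p95": cuts[94], "p99": cuts[98]}

# Invented sample: 99 fast requests and one slow outlier.
samples = [10.0] * 99 + [500.0]
result = latency_percentiles(samples)
print(result["p50"])  # 10.0 -- the median is unaffected by the outlier
print(result["p99"])  # large -- the tail percentile exposes it
```

This is why SLOs are typically written against tail percentiles rather than averages: one slow request in a hundred barely moves the mean, but it dominates p99.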

What we are looking for

    • Experience in designing, analyzing and troubleshooting distributed systems serving production traffic.
    • Experience with algorithmic thinking, data structures, and software complexity.
    • Experience in writing scripts in one or more languages such as Python or Go
    • Systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive.
    • Ability and desire to debug and optimize code and automate routine tasks.
    • Experience with on-call duty: you know why it’s hard, you work to improve it, and you make it so well documented that every engineer wants to be on rotation.
    • Passion for automating code testing and deployment through the use of containers.

This Year

Linux System Administrator I
linux python kubernetes terraform ansible sysadmin Jan 26

What you get to do:

  • On-going Sysadmin work on bare metal and AWS instances.
  • Proactive system/network monitoring.
  • Proactive system capacity planning.
  • Deployment and task automation utilizing Salt or other methods.
  • Issue/incident response.
  • Peer-level technical support
  • Work with the compliance team to implement controls and other security needs.
  • Continuous improvement of the system as a whole.

What you bring to the team: 

  • As a result of a government contract in which this position will be involved, this role requires U.S. citizenship.
  • Bachelor's degree in CS, IS, IT or equivalent required
  • Strong interpersonal, verbal and written communication skills with the ability to keep key stakeholders informed in a timely manner
  • 2+ years of Linux administration and engineering experience.
  • 1+ years of Python development experience, or an equivalent level of proficiency in another language.
  • Experience leading, or participating in, the implementation of at least one configuration management system
  • Experience deploying a host virtualization platform and containers.
  • Experience with fully implemented automation workflows.
  • Web server experience (SaaS experience a huge plus)
  • Motivated self-starter with ability to self-manage and work well independently, as well as in a team within a dynamic environment.
  • Strong application, operating system, kernel, and hardware debugging skills.

Why work for us:

  • Competitive salary and an unlimited PTO policy.
  • 401k match which is fully vested after 3 years of service
  • A dynamic and fun work environment with passionate, top-notch colleagues
  • An opportunity to join a company at an inflection point in its growth pathway
  • Leadership that is invested in hearing your ideas and in your professional growth
Senior Java Software Engineer
Anonos  
java spring apache-spark docker kubernetes senior Jan 22

We are looking for a Senior Software Engineer to join the Anonos BigPrivacy team.

As a member of our engineering team, you will have responsibility over the ongoing development and maintenance of state-of-the-art data privacy software. You will make expert design decisions and technology recommendations based on your broad knowledge of modern software development.

We are a 100% remote organization. We use Slack and Zoom for communication, Ansible, TravisCI and AWS for CI/CD, and GitHub/ZenHub for tracking user stories. We work using the Kanban methodology, with monthly releases, and have regular backlog grooming meetings and retrospectives to continuously improve our processes.

Our software is implemented in Java, Kotlin, and JavaScript (Node.js). We are looking for someone with expert level knowledge of Java or Kotlin, and have an interest in working with server-side JavaScript. You should also be comfortable automating tasks, writing shell scripts, and working with Linux servers and cloud environments (primarily AWS). Some other technologies we use: Docker, Kubernetes, Apache Spark, Cassandra, Apache Kafka, MongoDB, React.js, Spring framework.

Anonos takes pride in its high-quality software so you must be committed to a high standard of development and testing. We expect you to think about programming tasks critically and develop code that is clean, reusable, efficient, well-documented, and well-tested. If you can explain what the SOLID principles are and why they are beneficial, how to properly go about refactoring, and compare and contrast various testing frameworks, then you will likely be a good fit for our team.

We are interested in speaking with exceptional people who can bring the following to the team:

- 8+ years of Java software development experience
- Expert-level proficiency with object-oriented design and programming
- 100% committed to test-driven development; this is your preferred practice for developing software
- Experience working with the Apache Spark data processing framework
- Experience with the Spring framework and Spring Boot applications
- Interest in learning new technologies and tools (especially related to big data)
- Comfortable working in an Ubuntu Linux server environment
- Proficiency with Git, Maven and Linux

Senior Back End DevOps Engineer
aws security kubernetes shell python devops Jan 16

As more companies adopt public cloud infrastructure, and as cyber attacks grow in sophistication and harm, the ability to safeguard companies from these threats has never been more urgent.

Lacework’s novel approach to security fundamentally converts cyber security into a big data problem. They are a startup based in Silicon Valley that applies large-scale data mining and machine learning to public cloud security. Within a cloud environment (AWS, GCP, Azure), their technology captures all communication between processes/users/external machines and uses advanced data analytics and machine learning techniques to detect anomalies that indicate potential security threats and vulnerabilities. The company is led by an experienced team who have built large-scale systems at Google, Paraccel (Amazon Redshift), Pure Storage, Oracle, and Juniper Networks. Lacework is well funded by a tier-one VC firm and is based in San Jose, CA.

They are looking for a Senior DevOps engineer with strong AWS and Kubernetes experience who is excited about building an industry leading, next generation Cloud Security System.

You will be a part of the team that architects, designs, and implements highly scalable distributed systems that provide availability, scalability and performance guarantees. This is a unique and rare opportunity to get in on the ground floor and help shape their technologies, products and business.

Roles/Responsibilities

  • Assist in managing Technical Operations, Site Reliability, production operations and engineering environments 
  • Run production operations for their SaaS product
    • Manage the monitoring system
    • Debug live production issues
    • Manage software release roll-outs
  • Use your engineering skills to promote platform scalability, reliability, manageability and cost efficiency
  • Work with the engineering and QA teams to provide your valuable feedback about how to improve the product
  • Participate in on-call rotations (but there is really not a lot of work since you will automate everything!)
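
The anomaly detection at the heart of Lacework's product can be illustrated with a toy sketch in plain Python (a z-score detector on invented data; this is a stand-in for, not a description of, their actual large-scale data mining and machine learning methods):

```python
import statistics

def is_anomalous(history, value, threshold=3.0):
    """Flag `value` as anomalous if it lies more than `threshold`
    standard deviations from the mean of the observed history."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    return abs(value - mean) > threshold * stdev

# Invented example: steady connection counts, then a sudden spike.
baseline = [100, 102, 98, 101, 99, 100, 103, 97]
print(is_anomalous(baseline, 101))  # False: within normal variation
print(is_anomalous(baseline, 450))  # True: far outside the baseline
```

Real systems layer seasonality handling, per-entity baselines, and learned models on top of this idea, but the core question is the same: how far does this observation sit from the behavior we have seen before?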

Requirements:

  • 4+ years of relevant experience (Technical Operation, SRE, System Administration)
  • AWS experience 
  • Experienced scripting skills in shell and/or Python
  • Eager to learn new technologies
  • Ability to define and follow procedures
  • Great communication skills
  • Computer Science degree 
Lead DevOps Engineer - The RealReal
docker devops aws terraform kubernetes python Jan 11

The RealReal (NASDAQ: REAL) is leading the way in authenticated luxury consignment, online and in real life at our brick and mortar locations. Founded in 2011, we’re growing fast and fundamentally changing the way people buy and sell luxury — a multi-billion dollar industry. With a team of in-house experts who inspect every item we sell, our commitment to authenticity sets us apart and creates a foundation of trust with shoppers and consignors. Our mission to extend the life cycle of luxury items is leading innovation in sustainable fashion. We’re proud to promote the circular economy and to be the first luxury member of the Ellen MacArthur Foundation’s prestigious CE100 USA.

Employees at The RealReal are dedicated, collaborative and innovative, and we’re looking for exceptional talent to join our team. Build your career with us and enjoy 401K matching, health, dental and vision insurance, commuter flex spending, healthcare flex spending, generous PTO, a mother’s room and flexible work hours!

The DevOps team is seeking Senior Engineers to tackle an ambitious project pipeline including solutions for disaster recovery, dynamic development environments, and more. Bring your thorough, practiced understanding of DevOps, Cloud Infrastructure, and Site Reliability to this team.

DUTIES & RESPONSIBILITIES

  • Maintain and evolve the production infrastructure, strategically employing automation, and infrastructure-as-code
  • Build and maintain code pipelines, designing and building automation in order to enable agile software development, using self-service where possible
  • Collaborate efficiently and effectively with Engineers and Product teams on complex problems
  • Build and contribute to infrastructure services, keeping with 12 Factor methodology
  • Quickly absorb context and tribal knowledge while ramping up and using that to build or bolster documentation
  • Keep a strong level of quality and velocity in your work, while collaborating and reporting when appropriate
  • Exercise and promote security best practices throughout your workflow
  • Participate in an on-call rotation on a regular basis and respond to incidents reliably and professionally

REQUIREMENTS

  • 4+ years experience in Site Reliability engineering and Cloud administration
  • 2+ years automation experience using popular languages (bash, python, etc); software development background a plus
  • 2+ years of professional experience with UNIX-based operating systems
  • Experience building Continuous Integration / Continuous Deployment (CI/CD) workflows
  • Experience tuning and troubleshooting performance for high traffic web services
  • Proficient with crafting concise and professional communications during emergency production infrastructure incidents
  • Strong understanding of the software development lifecycle
  • Strong understanding of common network protocols, including HTTP, HTTPS, TCP, SSL/TLS, and relevant diagnostic tools
  • Database fundamentals and experience with MySQL or PostgreSQL
  • Git and Github workflows
  • Understanding of packaging, deployment, and support of containerized (Docker) applications
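
The protocol knowledge called for above (HTTPS, SSL/TLS) often shows up in practice as configuring clients correctly during incident triage. A small illustrative sketch using only Python's stdlib `ssl` module (the specific policy choices here are our assumptions, not requirements from the posting):

```python
import ssl

def make_strict_context():
    """Build a client-side TLS context with certificate verification
    enabled and legacy protocol versions refused."""
    ctx = ssl.create_default_context()            # verifies certs and hostnames by default
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLS 1.0/1.1
    return ctx

ctx = make_strict_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True
```

A context like this would then be passed to `http.client.HTTPSConnection` or a socket wrapper; the point is that "supports HTTPS" is a set of explicit, checkable settings rather than a single flag.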

NICE TO HAVE

  • Experience deploying microservices environments
  • Experience converting applications to run in Docker containers, and with orchestration layers
  • Experience using Terraform with multiple providers and/or integrated with a Build/Release system
  • Computer Science or Engineering degree
Senior API Engineer
Redox  
typescript node-js postgresql apache-kafka kubernetes api Dec 26 2019

We at Redox understand that we are all patients, and our mission is to make healthcare data useful and every patient experience better. Our values represent the basis of our culture of trust, transparency, and personal growth, and define how we want to interact with each other and the world.

Redox’s full-service integration platform accelerates the development and distribution of healthcare software solutions by securely and efficiently exchanging healthcare data. With just one connection, data can be transmitted across a growing network of more than 500 healthcare delivery organizations and more than 200 independent software vendors. Members of the Redox Network exchange more than seven million patient records per day, leveraging a single data standard compatible with more than 40 electronic health record systems. We are on a path to double our number of client connections over the next year and need to continue to build applications that scale accordingly.

About the Team

Our applications provide ingress and egress pathways that are responsive to the communication and data format needs of our customers, all while being resilient to our scaling needs to process millions of records per day. Our engineering teams own their solutions, enjoying the autonomy to design and implement the technical solutions to the hard problems presented by the myriad ways of exchanging healthcare data. 

A sampling of the technologies we use to implement these solutions include:

  • Libraries and µ-services built using TypeScript/NodeJS
  • Data management using Postgres, Kafka, and Redis
  • Horizontally scalable containerized deployments using Docker, Rancher, and Kubernetes
  • Application monitoring using InfluxDB, Grafana, and SumoLogic

An impactful engineer will:

  • Collaborate with other team members to continue to scale our architecture, taking into account the needs of today while remaining flexible enough to evolve for the needs of tomorrow
  • Work within a µ-service architecture, creating new solutions and decomposing our existing monolith
  • Own projects from end to end, executing on designs involving multiple fellow engineers
  • Participate in all phases of the SDLC - from requirements, design, and development through testing, deployment, maintenance, and support
  • Create RESTful APIs that adhere to best practices, as well as build out tolerant async models of communication
  • Understand the tension between an ideal end state and delivering value quickly and effectively prioritize between those options
  • Bias towards action while solving the biggest problems in sight
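
The "tolerant async models of communication" mentioned above usually start with retries and exponential backoff around unreliable calls. A minimal sketch in Python's asyncio (the posting's stack is NodeJS/TypeScript, where the shape translates directly; `flaky_send` is an invented stand-in for a real transport):

```python
import asyncio

async def send_with_backoff(send, payload, retries=4, base_delay=0.01):
    """Retry an async send with exponential backoff, re-raising
    the last error once retries are exhausted."""
    for attempt in range(retries):
        try:
            return await send(payload)
        except ConnectionError:
            if attempt == retries - 1:
                raise
            await asyncio.sleep(base_delay * 2 ** attempt)  # 10ms, 20ms, 40ms...

# Invented stand-in: fails twice, then succeeds.
calls = {"n": 0}
async def flaky_send(payload):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return f"delivered: {payload}"

print(asyncio.run(send_with_backoff(flaky_send, "record-123")))
# prints "delivered: record-123" after two transient failures
```

Production variants add jitter, dead-letter queues, and idempotency keys so that a retried delivery is safe even when the first attempt actually succeeded.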

Your day to day will consist of:

  • Pairing with other team members, embracing a culture of collaboration and mutual respect to solve complex problems
  • Designing solutions to feature requirements as prioritized by Product Management
  • Implementing solutions through the entirety of the SDLC, including testing, deployment, and operationalization
  • Creating generic solutions to complex problems
  • Reviewing fellow engineers’ code prior to shipping
  • Troubleshooting production issues as they arise and building a more fault-tolerant system

Preferred Experience

  • 5+ years of professional software development experience in one or more modern general-purpose languages (JavaScript/TypeScript, C#, etc.). Your day-to-day development will be spent working with NodeJS and TypeScript, but we are more interested in your ability to solve problems than the language used to solve those problems.
  • Experience writing horizontally scalable applications
  • Experience working with relational data stores
  • Experience integrating with 3rd party APIs
  • Strong desire to expand your professional impact and autonomy
  • Experience in healthcare IT
  • Excellent written and oral communication skills, including pairing
  • Effectively give and receive feedback, both positive and constructive
  • Operate effectively on a remote team
  • Experience leading a team is a PLUS

It is not expected that any single candidate would have expertise across all of the areas outlined above. Please apply even if you are not sure you meet all these criteria. If you are interested in the role and think it could be a fit, we'd like to hear from you.

Senior DevOps Engineer
aws docker pulumi kubernetes terraform ops Dec 23 2019

What we are looking for:

The Senior DevOps Engineer is a high-impact role where you will work closely with the software engineering teams to help them deploy critical software to AWS. This position will have significant impact on building out our growing infrastructure, implementing infrastructure-as-code with Pulumi, and leveraging Kubernetes for orchestration of Docker containers.

A passion for data security is key as you will be frequently dealing with patient data that falls under HIPAA regulations. Your desire and ability to collaborate, mentor, and learn is critical to your success at IDx. You must have a demonstrable enthusiasm for good DevOps practices and an irresistible urge to share them with others. You are someone who identifies issues early and brings them to the table along with multiple solutions. You believe that continuous improvement is key to the difference between good and great and can inspire others to follow your example. You love learning and teaching and find satisfaction in multiplying the effectiveness of those around you.

To be successful in this role you must be able to:

  • Apply best practices and help others understand their importance.
  • Effectively document the architecture, design and functionality of implementations.
  • Have awareness of new trends, technologies, and tools and understand when to apply them.
  • Communicate complicated technical concepts to diverse audiences.
  • Make others better through documentation, technical guidance, and mentoring.

Requirements:

  • Strong analytical skills with great verbal and written communication.
  • Experience architecting, designing, implementing, and deploying complex solutions.
  • Experience with both Linux and Windows.
  • Experience with Docker and Kubernetes.
  • At least 5 years’ DevOps experience.
  • 3+ years’ experience building "Infrastructure as Code" with a strong understanding of Amazon Web Services (AWS).
  • Experience making decisions that balance the tradeoffs between technical and business needs.
  • Applicants must not now or in the future need IDx to sponsor an immigration case for employment (for example, H-1B or other employment-based immigration cases).

What will help you in this role:

  • Experience delivering highly secure and efficient solutions within comprehensive compliance regulations.
  • AWS Certified Security Specialty.
  • Experience with infrastructure-as-code tools such as Pulumi or Terraform.
  • Experience working with data in a HIPAA compliant environment.
  • Knowledge of or interest in the medical device software development industry.
  • Experience working in an ISO- or FDA-regulated environment, or another highly regulated environment (FAA, etc.) and working with Quality Management Systems.

IDx is a challenging and rewarding environment that provides amazing opportunities to:

  • Work on unique opportunities that will be hard to find at other companies.
  • Work on the first ever autonomous AI system cleared by the FDA to provide a diagnostic decision.
  • Work with world-renowned doctors who are pushing the limits of machine learning in medicine.
  • Tackle complex problems/projects with the highest levels of quality and execution for audiences that include top technologists, the FDA, and world-leading healthcare providers.
  • Push the accessibility and quality of healthcare to new heights to improve the lives of millions of people.
Kubernetes Solution Engineer US East Coast
cloud go kubernetes healthcare Dec 21 2019

We are looking for a Kubernetes Solution Engineer on the US East Coast

Giant Swarm is a fast-growing open-source infrastructure management platform used by modern enterprises. Our vision is to empower developers around the world to ship great products.

You are the voice of our existing and prospective US customers inside Giant Swarm and the voice of Giant Swarm towards these customers, helping both sides to be successful.

We're a distributed, diverse, and growing team spread across Europe, with a small office in our headquarters in Cologne, Germany. With under 5% of the company working there, all workflows are created to function remotely - but of course, if you want to visit Cologne, you are more than welcome! You will also find someone in 15 other countries though.

While we are remote-first, we appreciate quality time with our co-workers, so we meet in person twice a year to work and have fun together.

Work-life integration

  • Flexible working hours, and working from home or anywhere you prefer but please note that your permanent location should be somewhere on the US East Coast.
  • Currently, our team members’ kids outnumber our employees.
  • We don’t only care about the kids “within” the company, but about all children - for example, we offset the carbon emissions of all our flights.
  • As an international company, we want to create similar standards for everyone, regardless of location. So additional perks (for example, a location-aware, fixed amount paid each month to cover costs like co-working, phone contracts or gym memberships), paid parental leave and healthcare compensation are standard for all team members.

Your Job:

  • We have recently won our first US-based client and plan to expand this area further.
  • Together with our first US-based Platform Engineer, you’ll be spearheading our future US team.
  • You’ll work closely with our cross-functional teams based in Europe.
  • In close cooperation with our US customers, you’ll understand their architecture and further their understanding of Giant Swarm, helping them get the most out of our product and the CNCF landscape and master Cloud Native tools like Kubernetes, Prometheus, Loki, Helm and others.
  • Our solution engineers become part of the diverse open source communities around the tools our customers need and use, contributing back to those projects where possible. You build and maintain Helm charts that might start out as a special project for one customer and end up as a managed service run for all our customers.
  • You run trainings and workshops at conferences, for our existing customers, as well as for Giant Swarm’s potential customers. Do note that the role requires minimal travel, as most of our interactions as a fully remote company happen via video conferencing.
  • You learn from customers’ problems with moving to microservices architectures, get your hands dirty, and find out what Cloud Native projects look like under the hood. The more successful our customers are on their Cloud Native journey, the more successful we are.
  • Ultimately, you are also responsible for documentation, helping the product teams implement fixes, prioritizing features, and making sure we only need to answer questions once.
  • First impressions are important! You help create a great onboarding experience for our customers and become their main point of contact.
  • To sum it up: customer success is all that matters. Within the solution engineering team, we make sure our customers are happy and taking the right steps going forward - ideally, you anticipate the problems they might face in the future.
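The Helm-chart work described above ultimately comes down to templating Kubernetes objects for customer workloads. As a rough illustration (the app name, image, and port below are invented examples, not Giant Swarm specifics), the kind of Deployment a chart renders can be sketched as a plain Python dict:

```python
# Illustrative sketch: the Kubernetes Deployment object a Helm chart might
# template for a customer workload. All names here are made up for the example.

def deployment_manifest(name: str, image: str, replicas: int = 2) -> dict:
    """Return a Deployment spec as a plain dict (what `helm template` renders to YAML)."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            # The selector must match the pod template's labels, or the
            # API server rejects the object.
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [
                        {"name": name, "image": image, "ports": [{"containerPort": 8080}]}
                    ]
                },
            },
        },
    }

manifest = deployment_manifest("customer-api", "registry.example.com/customer-api:1.0")
print(manifest["kind"])  # Deployment
```

In a real chart the same structure lives in a YAML template, with the name, image, and replica count supplied through values files per customer.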

Requirements:

  • You have experience with more than one programming language, including Go, and a strong architectural background.
  • You feel at home in the Kubernetes world, especially the multitude of Open Source tools around Kubernetes and the Container World.
  • You can explain complicated things concisely and easily for a diverse audience.
  • You have a strong sense for providing awesome customer service that makes life easier for developers. Your communication skills (especially in written form) are excellent. Very good English skills are a must-have; German skills are a bonus.
  • You are open to attending conferences with our team and holding talks in front of customers. You are happy to travel 5-10% per year.
  • You are based on the US East Coast.

Why we think this job is worth applying for (challenge us!)

Impact, Impact, Impact! We are a remote-first organization with a growing team from 15+ European countries. Every new team member changes the team. This is great! People who know things we don’t are highly welcome.

“It's easier to ask forgiveness than it is to get permission” (Grace Hopper) - sure, it’s not 100% like this, but we have a strong culture of failure, which is part of our agile mindset. We don’t do things by the guidebook. You can try things out! Our default of 100% transparency will help you here.

We play a key role in our customers' digital transformation. We have partnered up with Amazon and Microsoft to provide our solution on their cloud platforms - more will follow.

We have been in this ecosystem from the get-go and as part of the CNCF family, we feel at home in the community. As a part of Giant Swarm, you will also join this extended family.

We serve some of Europe's leading organizations and are talking to many more.

WHY Giant Swarm?

We like to give you a glimpse of what working with us is like:

Self-organization

Creative work needs freedom and openness. We encourage you to do your work wherever and whenever you want. We expect passion and encourage sustainability. If you need rest, take it. We don't count holidays - but we are also aware that this freedom, combined with remote work, can lead to working too much. So we encourage you to take holidays and help you manage the flexibility.

Teamwork

We are a growing company with team members distributed all over Europe and plans on expanding to the US. Our ambitious goals are only achievable as a team. Everybody’s input is highly welcome and appreciated. Although sometimes rules and processes are necessary, we try to keep them as lean as possible. Always question the status quo and find new ways of collaboration and teamwork.

Learning

Learning is mandatory and fun at the same time. If you realize you want to expand your knowledge in a specific area, we support you with conferences, books etc.

Basics

We offer fair (transparent and open) salaries with benefits like choosing your own laptop, additional perks (for example, a location-aware, fixed amount paid each month to cover costs like co-working, phone contracts or gym memberships), paid parental leave, and healthcare compensation. You will also participate in our stock options program. Currently, our team members have more children than we have employees, so family-friendliness is a must.

We are not hiring job descriptions. We hire humans. :) We welcome applications from everybody, regardless of ethnic or national origin, religion, gender identity, sexual orientation or age.


Interested? Questions? Contact Larissa or apply directly.

Software Engineer
react-js node-js typescript kubernetes graphql Dec 20 2019

[Note: Due to the holidays we will not be reaching out to qualified candidates until early January.]


We’re always looking for passionate software developers to contract with us on client projects. Projects vary in industry, scope, duration, and tech platform, so we’re always looking to build our roster of people with a variety of expertise and availability. Our current contract developer preferences include:

  • Proven experience with React.js, Node.js, and TypeScript
  • Nice to have: Experience with Apollo/GraphQL, Kubernetes, Terraform

Beyond those qualities, here’s a more general description of our developer role under any contract:

As a Developer, you'll be contributing to the successful delivery of digital products. You articulate the objective rationale behind your coding decisions, working with your teammates to ensure those decisions align with business and audience needs. You'll be switching between setting direction, creating software, and helping your teams constantly improve. You handle it with ease through solid time management skills, enthusiastic client service, and an inspiring attitude. You thoughtfully apply the latest tools, trends, and practices of development alongside those that are more tried-and-true.

In your day-to-day you will:

  • Collaborate with your project team to evaluate a product experience holistically, identify next steps and deliver the individual features that make up the product
  • Objectively explain and represent your contributions and deliverables to your team and client
  • Clearly communicate with your project team to make sure they have full context on your work and you have the same on theirs, jumping in on other tasks as needed
  • Take on various tasks, development-related or not, to support different processes and needs on your project teams
  • Collaborate directly every day with designers, developers, engagement managers, and clients
Software Engineer
docker react-js cs kubernetes node-js javascript Dec 17 2019

POSITION SUMMARY

Working as a Software Engineer at Thycotic means being part of a highly capable, collaborative, and agile team of developers. Thycotic creates awesome software that's used by thousands of IT professionals all over the globe.

This can be a remote/telecommute position based out of your home office. Reports to the Software Development Team Lead. 


KEY RESPONSIBILITIES

  • Be a fully contributing member of a Thycotic Sprint team
  • Work specifically on container technology as an enabler for our delivery strategy

SKILLS & REQUIREMENTS

  • 5+ years’ experience in C#, .NET Core, MSSQL, and related technologies
  • Strong design skills and full software development lifecycle experience required
  • Experience with JavaScript and front-end JavaScript libraries like Angular/React is a BIG plus
  • Strong understanding of Object-Oriented principles, the .NET Framework, ASP.NET, relational databases, and web application development
  • Thorough knowledge of container concepts (LXD, Docker) and what it means to deliver enterprise products via containers
  • Ability to write container code for Docker, Docker Swarm, Kubernetes
  • Ability to configure and administer Nginx and Node.js web servers
  • Substantial experience working on the command line with a ‘nix flavor or equivalent
  • Passionate about writing quality code and constantly honing your development skills
  • Ability to quickly read and understand both new and existing code
  • Ability to look at the big picture, come up with great new ideas, but also execute those ideas and write the code to make it happen
  • Seeking a highly collaborative, flat environment--there's not a lot of hierarchy or red tape here
  • Experience with large codebases and unit testing -- mock frameworks, web testing, database testing, etc.
  • Desire to develop using Test Driven Development
  • Comfortable developing using pair programming


WHY WORK AT THYCOTIC?

We’re passionate problem-solvers doing our part to make the world a safer place. We invest in people who are smart, self-motivated and collaborative.

What we offer in return is meaningful work, a culture of innovation and great career progression!

Thycotic was awarded “Best Places to Work” in 2019 in recognition of the hands-on experience and growth opportunities available here, as reported by employees.  A focus on employee advancement and our ethos of respect are just some of the reasons why people love working here!



Thycotic is an Equal Opportunity Employer and does not discriminate on the basis of race, ancestry, national origin, color, religion, gender, age, marital status, sexual orientation, disability, or veteran status.

Upon conditional offer of employment, candidates are required to complete a criminal background check and drug screen per Thycotic employment policy. In addition, all publicly posted social media sites may be reviewed.



DevOps Engineer
automation web-services aws kubernetes sysadmin devops Dec 16 2019

Are you an automation ninja? Can you replace people with scripts that you write? Does poorly engineered architecture give you nightmares?

If this describes you then please read on! Exodus uses multi-cloud hosted backend services, running numerous cryptocurrency coin nodes and services for our software, and we need ninjas to help us automate, monitor, maintain, and scale them.

What You Will Do

  • Engineer architectures and automation for zero-downtime deployments.
  • Use technologies like Terraform to manage infrastructure as code.
  • Work with Kubernetes and Prometheus to scale and monitor micro-services.
  • Use tools like Helm and GitlabCI to automate deployments.
  • Work with our development teams to help them set up automation pipelines and solve problems.
  • Collaborate with other DevOps engineers to make the best solution possible.
  • Build a geo-distributed infrastructure.
  • Participate in on-call schedules and act in a server/technical support capacity to the team.
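The zero-downtime deployments in the first bullet are usually achieved in Kubernetes with a rolling update that never removes an old pod before its replacement is Ready. As a minimal sketch (generic defaults, not Exodus's actual settings), the relevant `strategy` block can be expressed like this:

```python
# Sketch: the Deployment update strategy behind zero-downtime rollouts.
# maxUnavailable=0 means no old pod is taken down before a new one is Ready;
# maxSurge bounds how many extra pods may run during the rollout.

def zero_downtime_strategy(max_surge: int = 1) -> dict:
    return {
        "type": "RollingUpdate",
        "rollingUpdate": {"maxUnavailable": 0, "maxSurge": max_surge},
    }

def peak_pods(replicas: int, max_surge: int = 1) -> int:
    """Worst-case pod count mid-rollout: all old replicas plus the surge."""
    return replicas + max_surge

strategy = zero_downtime_strategy()
print(strategy["rollingUpdate"]["maxUnavailable"])  # 0
print(peak_pods(4))  # 5
```

The trade-off is extra capacity during rollouts: with `maxSurge=1` a 4-replica service briefly runs 5 pods, which the cluster must have room for.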

Who You Are

  • You are based (or are willing to work) in a timezone between UTC+8 (e.g., Malaysia) and UTC-7 (e.g., Los Angeles), in the USA or Eastern Asia.
  • You have excellent references and a history of trust and established relationships in former careers.
  • You have a long history and track record of DevOps that can be demonstrated via GitHub, blogs and/or in a technical interview process.
  • You take initiative and don't wait for direction.
  • You have a passion for cryptocurrencies like Bitcoin and a demonstrated passion for solving problems.
  • You don't care whether the platform is Azure or AWS; you are willing to work with either and have some experience with at least one.
  • You are willing to be available regardless of when server issues occur.

What We Offer

  • Freedom to work wherever you want, whenever you want.
  • Building the future. Cryptocurrencies lay the foundation of the internet of value, the next major wave in application technology and personal finance.
  • Collaborative and feedback-driven culture.
  • Opportunity to grow. The sky is the limit if you're hungry to succeed.
  • Fair pay, no matter where you live.
  • Competitive compensation package. (Including benefits for US employees only)

Our Hiring Process

At Exodus, we pride ourselves in hiring people from all around the world. We work with individuals from various backgrounds; some traditional and some a bit more unconventional.

Our hiring process focuses on 2 pillars.

  • Efficiency. You can expect the process to take between one and two weeks. We know what it’s like to wait weeks for a recruiter to get back to you and want to be respectful of your time.
  • Transparency. We anticipate you asking questions and will answer with the utmost candor.

We are committed to shaping a better world and have built our team based on empathy, radical candor, initiative, and humility.

Overall, our goal is that you have a great candidate experience with us.

Cloud - Technical Lead
Elastic  
elasticsearch cloud saas paas kubernetes java Dec 14 2019

Elastic is a search company with a simple goal: to solve the world's data problems with products that delight and inspire. As the creators of the Elastic Stack, we help thousands of organizations including Cisco, eBay, Grab, Goldman Sachs, ING, Microsoft, NASA, The New York Times, Wikipedia, and many more use Elastic to power mission-critical systems. From stock quotes to Twitter streams, Apache logs to WordPress blogs, our products are extending what's possible with data, delivering on the promise that good things come from connecting the dots. We have a distributed team of Elasticians across 30+ countries (and counting), and our diverse open source community spans over 100 countries. Learn more at elastic.co

About The Role

The Cloud team is responsible for the development of products such as Elastic Cloud Enterprise and Elastic Cloud on Kubernetes, as well as the operation of our Elastic as a Service offering. The SaaS offerings are built on top of these cloud products.

We are looking for a technology and engineering leader to help us realize our Cloud team’s goals. You will be responsible for technical design for new features and for improving existing subsystems, and you will work with several functional areas in Cloud. Your responsibilities will include technical leadership that will enable Elastic products to be metered, reported for usage, and billed through monthly invoices. The areas you will work in are impactful to Elastic - they contribute to Elastic’s SaaS consumption-based billing platform, chargeback features for the on-premise product, and integration with the AWS/GCP/Azure marketplaces. The data ingestion system we build to power these features processes a critical stream of events with low-latency and real-time requirements.

You will participate in roadmap and project planning efforts and will own their delivery. You’ll participate in project management efforts as the teams execute on plans, and you’ll have a role in communicating progress and status to various teams at Elastic.
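To give a sense of what consumption-based metering involves (the event shapes, resource names, and rates below are invented for illustration and are not Elastic's actual model): raw usage events are aggregated per month and per resource, then priced into an invoice.

```python
# Illustrative sketch of consumption-based metering: fold raw usage events
# into per-month, per-resource totals, then price one month's usage.
# Resource names and rates are made up; rates are in cents per unit.
from collections import defaultdict

def monthly_usage(events):
    """events: iterable of (month, resource, units). Returns {month: {resource: units}}."""
    totals = defaultdict(lambda: defaultdict(float))
    for month, resource, units in events:
        totals[month][resource] += units
    return totals

def invoice(usage_for_month, rates_cents):
    """Price one month's aggregated usage with per-resource unit rates (cents)."""
    return sum(units * rates_cents[res] for res, units in usage_for_month.items())

events = [
    ("2019-12", "ram_gb_hours", 100.0),
    ("2019-12", "ram_gb_hours", 50.0),
    ("2019-12", "storage_gb", 20.0),
]
usage = monthly_usage(events)
print(invoice(usage["2019-12"], {"ram_gb_hours": 2, "storage_gb": 10}))  # 500.0
```

In production the hard parts are the ones the posting names: ingesting the event stream reliably and with low latency, and layering features like overages and discounts on top of this basic aggregation.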

Some of the things you'll work on

  • Provide technical leadership across several functional areas in the team like metering, usage reporting, billing and invoicing, and marketplace integration across all products in Elastic
  • Work with Product Management to define new consumption models that will increase the flexibility of our SaaS offering and attract more users. Your contributions will play a role in improving our conversion rates and driving upgrades to higher subscription tiers
  • Work on a global scale with all the major Cloud hosting providers - AWS, GCP, Azure, IBM Cloud, etc. - and their marketplace solutions.
  • Work on creating a stable, scalable and reliable data ingestion pipeline built using Elastic products to harvest usage and telemetry data from multiple products.
  • Have a scope that covers contributing to technical plans and direction within Cloud and across other product teams in Elastic
  • Be a contact point in Cloud for other teams within Elastic. Examples include helping Support with difficult cases or consulting the Elastic Stack engineers with designing new features in a Cloud compatible way
  • Understand our company strategy, help translate it into technical terms, and guide our Cloud product’s direction to realize it
  • Create technical designs and build POCs for new efforts, validating a wild idea works before committing to it
  • Collaborating with other Tech Leads on the Cloud team and across Elastic to align priorities and roadmaps, and make appropriate technology choices and compromises
  • Be hands on with the codebase. Review work done in the team, and provide constructive feedback
  • Help the team define coding practices and standards

What you will bring along

  • Previous experience providing pragmatic technical leadership for a group of engineers
  • Previous experience in a role with ownership for technical direction and strategy, preferably in a start-up or scale-up environment
  • Experience designing data pipelines that ingest logs or metrics data from distributed systems
  • Proven experience as a software engineer, with a track record of delivering high quality code, preferably in Python, Java or Go
  • Experience implementing or deep knowledge of consumption-based SaaS billing platforms with features like overages, discounts, monthly and annual models, etc
  • Previous experience working with various partners outside of Engineering such as IT and Finance Operations teams
  • Technical depth in one or more technologies relevant for SaaS (orchestration, networking, docker, etc.)
  • Deal well with ambiguous problems; ability to think in simple solutions that reduce operational overhead and improve code maintainability
  • Interest in solving challenges in a SaaS billing platform, in terms of accuracy, scale, and features that make users’ lives easier

Nice to have

  • Experience with Elasticsearch as a user - understanding data modeling, aggregations and querying capabilities
  • Experience integrating applications with AWS, GCP or Azure marketplace solutions
  • Integrating with Cloud billing providers such as Stripe or Zuora


Additional Information - We Take Care of Our People

As a distributed company, diversity drives our identity. Whether you’re looking to launch a new career or grow an existing one, Elastic is the type of company where you can balance great work with great life. Your age is only a number. It doesn’t matter if you’re just out of college or your children are; we need you for what you can do.

We strive to have parity of benefits across regions and while regulations differ from place to place, we believe taking care of our people is the right thing to do.

  • Competitive pay based on the work you do here and not your previous salary
  • Health coverage for you and your family in many locations
  • Ability to craft your calendar with flexible locations and schedules for many roles
  • Generous number of vacation days each year
  • Double your charitable giving — we match up to 1% of your salary
  • Up to 40 hours each year to use toward volunteer projects you love
  • Embracing parenthood with minimum of 16 weeks of parental leave

Elastic is an Equal Employment employer committed to the principles of equal employment opportunity and affirmative action for all applicants and employees. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, or disability status or any other basis protected by federal, state or local law, ordinance or regulation. Elastic also makes reasonable accommodations for disabled employees consistent with applicable law.

When you apply to a job on this site, the personal data contained in your application will be collected by Elasticsearch, Inc. (“Elastic”) which is located at 800 W. El Camino Real, Suite 350 Mountain View, CA 94040 USA, and can be contacted by emailing jobs@elastic.co. Your personal data will be processed for the purposes of managing Elastic’s recruitment related activities, which include setting up and conducting interviews and tests for applicants, evaluating and assessing the results thereto, and as is otherwise needed in the recruitment and hiring processes. Such processing is legally permissible under Art. 6(1)(f) of Regulation (EU) 2016/679 (General Data Protection Regulation) as necessary for the purposes of the legitimate interests pursued by Elastic, which are the solicitation, evaluation, and selection of applicants for employment. Your personal data will be shared with Greenhouse Software, Inc., a cloud services provider located in the United States of America and engaged by Elastic to help manage its recruitment and hiring process on Elastic’s behalf. Accordingly, if you are located outside of the United States, your personal data will be transferred to the United States once you submit it through this site. Because the European Union Commission has determined that United States data privacy laws do not ensure an adequate level of protection for personal data collected from EU data subjects, the transfer will be subject to appropriate additional safeguards under the standard contractual clauses. You can obtain a copy of the standard contractual clauses by contacting us at privacy@elastic.co. Elastic’s data protection officer is Daniela Duda, who can be contacted at daniela.duda@elastic.co. We plan to keep your data until our open role is filled. We cannot estimate the exact time period, but we will consider this period ended when a candidate accepts our job offer for the position for which we are considering you. 
When that period is over, we may keep your data for an additional period of no longer than 3 years in case additional opportunities present themselves for which your skills might be better suited. For additional details, please see our Elastic Privacy Statement https://www.elastic.co/legal/privacy-statement.
