Remote big-data Jobs

Yesterday

Data Engineering Course Teaching Assistant
python big data cloud Aug 11
Springboard runs an online Data Engineering Career Track in which participants learn with the help of a curated, project-based curriculum and 1-1 guidance from an expert mentor.

Our mentor community - the biggest strength of our programs - comprises experts from the best organizations in the world. The Data Engineering Career Track will have Data Engineers, Data Developers, and other leading data experts at premier companies (e.g. Uber, Pandora, LinkedIn, Apple) and top-notch startups.

Data Engineering Career Track is a 6-month intensive bootcamp and we are looking for Course Teaching Assistants to help support students outside of their mentor calls.

Requirements

  • You are as passionate about teaching Data Engineering as about Data Engineering itself
  • You have professional experience with SQL and an OOP language like Python or Java
  • You have an excellent understanding of fundamental data engineering skills, tools, and concepts, including working with Big Data (Hadoop, Spark), cloud platforms (AWS, Azure), building data pipelines (batch/streaming, APIs), orchestration (Airflow), and containerization (Kubernetes, Docker). A minimal pipeline sketch follows this list.
  • You have a flexible schedule that allows you to work a few hours per day, at least 5 days per week
  • You are able to provide support via email and Slack
  • You are empathetic and have excellent communication skills, able to break down complex concepts for beginners
  • Bonus points for experience conducting reviews through online platforms!
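
For illustration, here is a minimal, self-contained batch pipeline sketch in Python covering the extract/transform/load pattern the requirements above refer to; the sample CSV, table name, and aggregation are invented for the example and are not part of the curriculum.

import csv
import io
import sqlite3

# Hypothetical raw input; real pipelines would read from files, databases, or APIs.
RAW_CSV = """user_id,event,amount
1,purchase,19.99
2,refund,-5.00
1,purchase,4.50
"""

def extract(text):
    # Parse CSV text into a list of dicts
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Keep purchases only and normalize types
    return [(int(r["user_id"]), float(r["amount"]))
            for r in rows if r["event"] == "purchase"]

def load(records, conn):
    # Write the cleaned records to a warehouse-style table
    conn.execute("CREATE TABLE IF NOT EXISTS purchases (user_id INT, amount REAL)")
    conn.executemany("INSERT INTO purchases VALUES (?, ?)", records)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
print(conn.execute(
    "SELECT user_id, SUM(amount) FROM purchases GROUP BY user_id").fetchall())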

Benefits

  • Membership in a rich community of expert mentors from great companies like Apple, Uber, and Pandora
  • Change the lives of students in our program and help us revolutionize online education!
  • Receive a monthly per-project honorarium and work at your convenience

More details

  • Completely online and self-paced. Coursework is ~200 hours and on average, students finish it in 6 months.
  • Participants in this course are working professionals and college students from all over the world, interested in getting started with data engineering.
  • Participants learn about Data Engineering with the help of a curated online curriculum and a personal mentor. They go through the curriculum at their own pace and have a weekly 30-minute checkin with their mentor to discuss questions, projects, and career advice!
  • Course TAs provide students with support via email and our online community so that students get the help they need even outside of their mentor calls.

The Springboard team of 180 works out of offices in the heart of San Francisco and Bengaluru. We’re backed by top investors, including Costanoa Ventures, Reach Capital, Learn Capital, Pearson Ventures, and the founders of LinkedIn and Princeton Review.

Working with us, you’ll enjoy competitive compensation, health insurance coverage (for employees based in California, our base plans are fully covered by Springboard; for employees based outside of California, we offer low-premium coverage), a 401k plan, a generous learning budget, team lunches and snacks, and an opportunity to impact thousands of lives alongside a fun, dedicated and mission-driven team. To learn more about our team and culture, follow us on Instagram @springboardlife!

We are an equal opportunity employer and value diversity at our company. We welcome applications from all backgrounds, and do not discriminate on the basis of race, religion, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
Senior Quality Engineer
senior embedded big data linux redis testing Aug 11
Are you passionate about performance?  Do you enjoy learning the ins and outs of a networked app and finding ways to make it go faster? Do you compile your own Linux or FreeBSD kernels to tweak performance to suit your goals?  Are you technical, but articulate, and excited about sharing your findings? Would you like to know that your work is contributing to a greater mission with global impact?  How would you like to do this from the comfort of your own home?
 
Dragos has an opportunity for a Senior Quality Engineer in Performance to join our growing team of talented Engineers making great contributions to our mission of Safeguarding Civilization!  As a Quality Engineer in Performance, you will help establish networked application performance metrics for our platform and document and communicate deltas and suggestions for improvements.  You will be responsible for the design and execution of tests that simulate real-world customer scenarios, as well as potential worst case scenarios, and come out of it with data and a path forward driven by that data.  In addition, as part of a larger (and growing) quality organization, you will be responsible for helping ensure team members bake performance testing into their work streams.
 
Our headquarters is in Hanover, MD and you have the flexibility of working from home or out of our office.

Responsibilities

  • Work with Agile Scrum/Kanban embedded QE Team members to define and execute performance-related tests for the platform as a whole and each subcomponent 
  • Analyze output and log files from tests with the goal of establishing baselines and measuring deltas, to proactively characterize and report on system/subsystem performance (a minimal analysis sketch follows this list)
  • Author test cases/suites in Test Rail, proactively review test cases written by other Quality Engineers
  • Communicate status of platform performance at any snapshot in time, to both technical and non-technical stakeholders
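
To make the baseline-and-delta workflow concrete, here is a minimal analysis sketch in Python; the throughput samples and the 5% regression threshold are invented for the example.

from statistics import mean, stdev

# Hypothetical throughput samples (requests/sec) from repeated test runs
baseline_runs = [9800, 9950, 9870, 9910]
candidate_runs = [9300, 9420, 9380, 9350]

baseline = mean(baseline_runs)
candidate = mean(candidate_runs)
delta_pct = (candidate - baseline) / baseline * 100

print(f"baseline  {baseline:.0f} +/- {stdev(baseline_runs):.0f} req/s")
print(f"candidate {candidate:.0f} req/s, delta {delta_pct:+.1f}%")

# Flag regressions beyond an agreed threshold (5% here, purely illustrative)
if delta_pct < -5:
    print("REGRESSION: candidate falls outside the acceptable performance envelope")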

Requirements

  • Strong UNIX/Linux skills from an administrative/management perspective
  • Demonstrated expertise with and understanding of TCP/IP, including routers, switches, firewalls, and familiarity with the OSI Network Model and how it relates to Linux/UNIX components.
  • Understanding of x86 architecture, hardware/software interactions, and impacts that HW configurations may have on software performance (i.e. NUMA node optimizations, CPU core affinities, etc.)
  • Organized, articulate, and a team-player
  • Preferred: strong experience with IXIA, T-REX, or Spirent for traffic generation
  • Preferred: experience with administration, monitoring, and tuning of big data application stacks and pipelines (e.g. Elasticsearch, MongoDB, NiFi, Redis, RabbitMQ)

Performance Objectives

  • 30 days: Have a basic understanding of Dragos’s platform and dependencies, and know how the Quality Practice works at Dragos
  • 90 days: Be able to autonomously conduct continued performance evaluations and provide input and suggestions on configuration improvements
  • 180 days: Proactively send reports to interested parties and be able to answer questions from both technical and non-technical standpoints
  • 365 days: Be seen as the SME in performance, proactively finding areas in the platform for driving and improving performance, and working with Product and Engineering to evaluate and incorporate these changes.
Our mission at Dragos is to protect the world’s most critical infrastructure from adversaries who wish to do it harm. We help defend industrial organizations that provide us with the tenets of modern civilization: running water, functioning electricity, and safe industrial working environments.
 
We are practitioners who have lived through and solved real security challenges. Our team members have responded to incidents including the Ukraine 2015 power grid attack, analyzed the CRASHOVERRIDE malware responsible for the Ukraine 2016 electric grid attack, analyzed the TRISIS malware responsible for the petrochemical facility attack in 2017, built and led the National Security Agency mission to identify nation-states breaking into ICS, and performed assessments on hundreds of assets around the world.
 
We offer competitive salaries, equity, and a comprehensive benefits package including medical, dental, vision, disability, 401K and life insurance.
 
Dragos is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse workforce. Come join us!

Last Week

Partner Manager, Australia & New Zealand
Dataiku  
manager data science big data Aug 10
Dataiku is allowing enterprises to create value with their data in a human-centered way while breaking down silos and encouraging collaboration. One of the most unique characteristics of our product, Data Science Studio (DSS), is the breadth of its scope and the fact that it caters both to technical and non-technical users. Through DSS, we aim to empower people through data and democratize data science.

Dataiku is looking for a Partner Manager based in our Sydney office to join our Partnerships organisation to manage partners in Australia and New Zealand. Since the launch of our company, we have experienced healthy growth and steady market demand. This is a new position due to rapid  business expansion.  The successful candidate will be responsible for managing existing partner relationships and forming new partnerships with organizations that can help us deliver successful joint solutions to our customers and grow our presence in the region.  

We’re looking for someone who can express passion about new technologies and the business benefits of AI and who can tie the value of  Dataiku Data Science Studio (DSS) to the offers of our partners, from both a product and GTM perspective.  The Partner Manager will engage with ecosystem partners - system integrators, consultants, value-added resellers and complementary technology partners - at the executive and field levels to develop solution offerings that deliver value for our joint customers. 

The Partner Manager is responsible for recruiting, onboarding, and enabling partners and managing go-to-market programs and field execution with our sales teams to exceed revenue targets and drive customer success. Dataiku is a fast-growing company with great momentum and the Partner Manager will need to work cross-functionally with Sales, Marketing, Services, and other departments to drive sustainable company growth.

You might be a good fit for the role if you have:

  • 5+ years experience in developing partnerships and revenue-generating partner relationships
  • Experience in “data” – big data, analytics, data science, BI/DW, data integration, AI, etc.
  • Strong belief in a customer-centric selling philosophy and applying a consultative approach in customer/partner interactions
  • Background in positioning innovative solutions
  • Strong ability to develop joint value propositions with partners and to cultivate champions within partner organizations
  • Ability to articulate competitive positioning and differentiation
  • Strong presentation, negotiation, and business planning skills
  • Desire to work in a fast-paced, collaborative environment with the ability to adapt to change
  • Travel up to 30% 
To fulfill its mission, Dataiku is growing fast! In 2019, we achieved unicorn status, went from 200 to 400 people and opened new offices across the globe. Spanning from Sydney to Frankfurt, Denver to London, geography doesn’t stop Dataikers from working closely together and sharing experiences. Collaboration is key within our product and culture. We strive to create a sense of belonging and community while fostering diverse thinking by encouraging cross-team, cross-office interactions like our annual company offsite or Paris onboarding. Fly over to Twitter, LinkedIn, and Instagram to read stories about our culture, people, and success. 
Sr. Data Scientist
python scala data science machine learning big data testing Aug 06

Senior Data Scientist @ NinthDecimal.

NinthDecimal (www.ninthdecimal.com) provides location-based intelligence to help advertisers plan, manage, measure, and optimize multi-platform cross-media campaigns to drive customer and revenue growth. As an industry leader in the AdTech & MarTech space, NinthDecimal delivers best-in-class measurement, insights, and analytics by deploying patented big data methodologies on a cutting-edge technology platform.

Our LocationGraph™ platform processes data on a massive scale, converting tens of billions of signals per day into accurate and actionable insights for our clients. We provide location-based intelligence services for top brands across industry verticals including retail, travel, leisure & entertainment, fast food & casual dining, telecommunications, and automotive.

As a member of the Data Science team, you’ll be responsible for developing statistical and machine-learning models that deliver accurate and robust measurement metrics of interest to our advertising clients. You will work closely with other data scientists, data analysts, product & engineering teams, and other business units. This is a great opportunity to work with real world data at scale and to help define and shape the measurement standards in a very dynamic and evolving industry.

Responsibilities:

  • Develop & deploy statistical & machine learning models at scale to create high quality disruptive products
  • Contribute to our growing portfolio of data science and technology patents
  • Establish robust processes to ensure the accuracy, stability, reproducibility, and overall quality of all data, algorithms, and the results they produce.
  • Represent Data Science team in product and roadmap design sessions
  • Participate in building reliable QA processes for both data and results
  • Collaborate on key architectural decisions and design considerations
  • Contribute to and promote good software engineering practices across the Engineering Department.
  • Understand the current data sets and models and provide thought leadership by discovering new ways to enrich and use our massive data assets

Qualifications Required:

  • A true passion for data, data quality, research and a solid data science approach
  • Master's or Ph.D. in Statistics, Economics, Operations Research, or a similar quantitative field
  • 5 to 10 years of professional experience with clear career progression and demonstrated success at developing models that drive business value
  • Excellent communication skills and the ability to present methodologies and findings to audiences with varying technical backgrounds
  • Solid understanding of probability and statistics
  • Solid understanding of research design and A/B test-vs-control statistical testing frameworks (a worked example follows this list)
  • Solid understanding of unsupervised and supervised machine learning approaches including clustering and classification techniques.
  • Experience in building Machine Learning models (GLM, SVM, Bayesian Methods, Tree Based Methods, Neural Networks)
  • Solid understanding of how to assess the quality of machine learning models – including the ability to tune and optimize models and to diagnose and correct problems.
  • Experience working with multiple data types including numerical, categorical, and count data.
  • A driven leader, able to manage competing priorities and drive projects forward in a dynamic and fast paced business environment.
  • Experienced/Advanced programmer in Scala, Python, or similar programming languages
  • Experienced/Advanced Programmer in Spark, SQL, and Hadoop
  • Experience in developing algorithms and building models based on TB-scale data
  • Familiarity with the digital media / advertising industry is a big plus
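
As a concrete instance of the test-vs-control framework called for above, here is a minimal two-proportion z-test sketch in Python; the visit counts and conversion numbers are fabricated for illustration.

from math import erfc, sqrt

# Hypothetical campaign measurement: exposed group vs matched control holdout
exposed_n, exposed_conv = 50_000, 1_550
control_n, control_conv = 50_000, 1_400

p1, p2 = exposed_conv / exposed_n, control_conv / control_n
lift = (p1 - p2) / p2

# Pooled two-proportion z-test
p_pool = (exposed_conv + control_conv) / (exposed_n + control_n)
se = sqrt(p_pool * (1 - p_pool) * (1 / exposed_n + 1 / control_n))
z = (p1 - p2) / se
p_value = erfc(abs(z) / sqrt(2))  # two-sided p-value under the normal approximation

print(f"lift {lift:.1%}, z = {z:.2f}, p = {p_value:.4f}")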

This Month

Cloud Support Engineer
 
cloud big data linux Jul 30
At Couchbase, big things happen. Every day, we’re translating vision into reality by tackling new and exciting challenges head-on. This is a breakthrough stage in our company, where the enthusiasm of our employees and leadership team is infectious and growing.  You’ll have the opportunity to learn new skills, grow your career, and work with the smartest, most passionate people in the industry.

As part of the Cloud support team, you will directly respond to questions from our customers on areas such as connectivity, the availability of Couchbase Cloud, and Couchbase and Couchbase Cloud features. You will work closely with our product and engineering teams to escalate complex problems as appropriate. Timely responses and proactive communication are crucial to keeping customers informed and answering their questions.

Note: Wish to work from home? Not a problem - This role can be remote and located anywhere in the United States.

Responsibilities

  • Utilize cloud and Couchbase expertise in a customer-first mindset to help our customers throughout the world be successful
  • Have passion for technology and collaboration
  • Solve complex distributed systems and cloud challenges of today’s cloud-native database infrastructure
  • Take part in our global cloud operations team across the US, UK, and India
  • Handle customer support questions of various levels of difficulty utilizing live text chat with technical or non-technical customers
  • Genuinely enjoy helping others and be able to interpret their requests
  • Communicate well, both in writing and verbally, and be knowledgeable about a wide range of cloud and open source technologies

Qualifications

  • 5+ years of experience in providing cloud support and troubleshooting database problems
  • Experience administering large-scale production environments, including IaaS and PaaS, operating systems (e.g. Linux, Windows), and database software
  • Degree in Tech/Computer Science or equivalent work experience
  • Strong operating knowledge and troubleshooting skills related to cloud platforms, operating systems (Linux, Unix, Windows), networking, databases and security
  • Basic familiarity with Amazon Web Services or other cloud infrastructure platforms is required
  • Couchbase skills are highly desired
  • Excellent communication skills, both written and verbal, as well as the soft skills to troubleshoot issues with customers
About Couchbase

Unlike other NoSQL databases, Couchbase provides an enterprise-class, multicloud to edge database that offers the robust capabilities required for business-critical applications on a highly scalable and available platform. Couchbase is built on open standards, combining the best of NoSQL with the power and familiarity of SQL, to simplify the transition from mainframe and relational databases.
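
To make the "best of NoSQL with the power and familiarity of SQL" claim concrete, here is a minimal N1QL query sketch, assuming the Couchbase Python SDK (3.x/4.x-style imports) and the bundled travel-sample dataset; the cluster address and credentials are placeholders.

from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions

cluster = Cluster(
    "couchbase://localhost",  # placeholder address
    ClusterOptions(PasswordAuthenticator("Administrator", "password")),
)

# N1QL: SQL-family syntax over JSON documents
result = cluster.query(
    "SELECT name, country FROM `travel-sample` WHERE type = 'airline' LIMIT 3"
)
for row in result:
    print(row)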

Couchbase’s HQ is conveniently located in Santa Clara, CA with additional offices throughout the globe. We’re committed to a work environment where you can be happy and thrive, in and out of the office.

At Couchbase, you’ll get:
* A fantastic culture
* A focused, energetic team with aligned goals
* True collaboration with everyone playing their positions
* Great market opportunity and growth potential
* Time off when you need it.
* Regular team lunches and fully-stocked kitchens.
* Open, collaborative spaces.
* Competitive benefits and pre-tax commuter perks

Whether you’re a new grad or a proven expert, you’ll have the opportunity to learn new skills, grow your career, and work with the smartest, most passionate people in the industry.

Revolutionizing an industry requires a top-notch team. Become a part of ours today. Bring your big ideas and we'll take on the next great challenge together.


Want to learn more? Check out our blog: https://blog.couchbase.com/

Couchbase is proud to be an equal opportunity workplace and is dedicated to pursuing, hiring and developing a diverse workforce. Individuals seeking employment at Couchbase are considered without regards to age, ancestry, color, gender (including pregnancy, childbirth, or related medical conditions), gender identity or expression, genetic information, marital status, medical condition, mental or physical disability, national origin, protected family care or medical leave status, race, religion (including beliefs and practices or the absence thereof), sexual orientation, military or veteran status, or any other characteristic protected by federal, state, or local laws.
Data Engineer - Cloud FinOps
 
cloud python big data finance aws Jul 27
Atlassian is continuing to hire with all interviewing and on-boarding done virtually due to COVID-19. Everyone new to the team, along with our current staff, will temporarily work from home until it is safe to return to our offices.

Atlassian is looking for a Data Engineer to join our Cloud FinOps team and build world-class data solutions and applications that power business decisions throughout the organisation. Are you a forward-thinking, structured problem solver who is passionate about working with a wide variety of people from all parts of our business? This is an opportunity to enable a world-class cloud centre of excellence as the genius behind our data transforms and delivery, helping drive how Atlassian uses cloud billing & utilisation data, application service metrics, and data models to make service cost management a data-driven process. You love thinking about the ways the business can consume this data and then figuring out how to build it.
You will have the opportunity to apply your background in building analytics data models to support the team across a broad range of analytical requirements. By being an ally to other organisations within Atlassian you will help to evolve practices and supporting models to integrate them into the sources of truth used for Service Governance, Site Reliability, and Technical Education.
You will lead a problem end-to-end, so those skills will come in handy not just to collect, extract, and clean the data, but also to understand the systems that generated it. On an ongoing basis, you'll be responsible for improving the data by adding new sources, coding business rules, and producing new metrics that support the business. The requirements will be vague. Iterations will be rapid. You will need to be nimble and take smart risks.

On your first day, you will have:

  • At least 3 years professional experience as a data engineer or in a similar role
  • Experience with solution building and architecting with public cloud offerings such as Amazon Web Services, Redshift, S3, EMR/Spark, Presto/Athena
  • Experience with Spark and Hive
  • Expertise in SQL, SQL tuning, schema design, Python and ETL processes
  • Expertise in building data pipelines with workflow tools such as Airflow, Oozie, or Luigi (a minimal DAG sketch follows this list)
  • Solid experience building RESTful APIs and microservices, e.g. with Flask
  • Experience in test automation and ensuring data quality across multiple datasets used for analytical purposes
  • Experience with Lambda Architecture or other Big Data architectural best practices
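
A minimal sketch of the workflow-tool expertise listed above, assuming Airflow 2.x-style imports; the DAG id, schedule, and task bodies are placeholders rather than Atlassian's actual pipelines.

from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_billing():
    # Placeholder: pull the day's cloud billing data from its source
    print("extracting billing data")

def build_cost_model():
    # Placeholder: transform billing rows into cost-model tables
    print("building cost model")

with DAG(
    dag_id="cloud_cost_rollup",      # hypothetical name
    start_date=datetime(2020, 7, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_billing", python_callable=extract_billing)
    model = PythonOperator(task_id="build_cost_model", python_callable=build_cost_model)
    extract >> model                 # extract must finish before the model build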

We would love if you have:

  • Experience with test automation and continuous delivery
  • A graduate degree in Computer Science or similar discipline

More about the team:

The FinOps team drives Atlassian’s pursuit of a lean cloud cost model through cost awareness and optimisation practices. We start this financial year with 3 goals:

  • Give teams the knowledge they need to maintain an ongoing awareness of their financial footprint, and opportunities to reduce it
  • Unify the worlds of finance and engineering via a common language
  • Optimise everything


More about our benefits

Whether you work in an office or a distributed team, Atlassian is highly collaborative and yes, fun! To support you at work (and play) we offer some fantastic perks: ample time off to relax and recharge, flexible working options, five paid volunteer days a year for your favourite cause, an annual allowance to support your learning & growth, unique ShipIt days, a company paid trip after five years and lots more.

More about Atlassian

Creating software that empowers everyone from small startups to the who’s who of tech is why we’re here. We build tools like Jira, Confluence, Bitbucket, and Trello to help teams across the world become more nimble, creative, and aligned—collaboration is the heart of every product we dream of at Atlassian. From Amsterdam and Austin, to Sydney and San Francisco, we’re looking for people who want to write the future and who believe that we can accomplish so much more together than apart. At Atlassian, we’re committed to an environment where everyone has the autonomy and freedom to thrive, as well as the support of like-minded colleagues who are motivated by a common goal to: Unleash the potential of every team.

Additional Information

We believe that the unique contributions of all Atlassians is the driver of our success. To make sure that our products and culture continue to incorporate everyone's perspectives and experience we never discriminate on the basis of race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status.

All your information will be kept confidential according to EEO guidelines.
Senior Software Engineer - Cloud
 
cloud senior python c embedded saas Jul 22
We’re looking for a Senior Software Engineer - Cloud with expertise in cloud-native architecture and SaaS application development to join a newly created engineering team chartered with building the next phase of our cloud services strategy from the ground-up.  This is an exciting and unique opportunity to have a major influence on the development of our service and contribute to the next phase of innovation for cloud-native databases!

This is a full stack engineering position where you’ll build and manage cloud-native applications. This engineer and team will have primary responsibility and accountability for developing, implementing and operating Couchbase's cloud platforms.  The team operates with a “run what you write” philosophy and engineers take responsibility for deploying and operating their code.

This role is also open to remote work within UK as our teams are distributed all over the world!

Responsibilities

  • Design, build, manage and operate the infrastructure and configuration of SaaS applications with a focus on automation and infrastructure as code.
  • Design, build, manage and operate the infrastructure as a service layer (hosted and cloud-based platforms) that supports the different platform services.
  • Develop comprehensive monitoring solutions to provide full visibility into the different platform components using tools and services like Kubernetes, Prometheus, Grafana, ELK, Datadog, New Relic, and other similar tools (a minimal exporter sketch follows this list)
  • Experience working within an Agile/Scrum SDLC
  • Interface with Product manager and Product owner to refine requirements and translate requirements to stories and epics. 
  • Perform detailed scoping of features 
  • Integrate different components and develop new services with a focus on open source to allow a minimal friction developer interaction with the platform and application services.
  • Identify and troubleshoot any availability and performance issues at multiple layers of deployment, from hardware, operating environment, network, and application.
  • Evaluate performance trends and expected changes in demand and capacity, and establish the appropriate scalability plans
  • Troubleshoot and solve customer issues on production deployments
  • Ensure that SLAs are met in executing operational tasks
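
As a minimal illustration of the monitoring work described above, here is a toy exporter using the prometheus_client Python library; the metric names and the simulated values are invented for the example.

import random
import time

from prometheus_client import Counter, Gauge, start_http_server

# Hypothetical application metrics
REQUESTS = Counter("app_requests_total", "Total requests handled")
INFLIGHT = Gauge("app_inflight_requests", "Requests currently in flight")

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://localhost:8000/metrics
    while True:
        REQUESTS.inc()
        INFLIGHT.set(random.randint(0, 10))  # stand-in for a real measurement
        time.sleep(1)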

Qualifications

  • Experience in modern software paradigms including cloud applications and serverless architectures 
  • 6+ years writing production back-end/embedded systems code.
  • Experience with agile methodologies and ability to lead scrums
  • Experience in languages such as Go, Python, C, C++, and scripting 
  • Some experience with front-end frameworks such as React, Angular, and Bootstrap is a solid plus.
  • Team lead experience; experience working with global and remote teams
  • Experience with a pair programming model highly desirable
  • Experience in full-stack development 
  • Cloud Infrastructure: Amazon Web Services, Google Cloud Platform, Azure
  • Operations  - Continuous Integration and Deployment
  • MS in Computer Science or equivalent experience
About Couchbase

Couchbase's mission is to be the platform that accelerates application innovation. To make this possible, Couchbase created an enterprise-class, multi-cloud NoSQL database architected on top of an open source foundation. Couchbase is the only database that combines the best of NoSQL with the power and familiarity of SQL, all in a single, elegant platform spanning from any cloud to the edge.  
 
Couchbase has become pervasive in our everyday lives; our customers include industry leaders Amadeus, AT&T, BD (Becton, Dickinson and Company), Carrefour, Comcast, Disney, DreamWorks Animation, eBay, Marriott, Neiman Marcus, Tesco, Tommy Hilfiger, United, Verizon, Wells Fargo, as well as hundreds of other household names.

Couchbase’s HQ is conveniently located in Santa Clara, CA with additional offices throughout the globe. We’re committed to a work environment where you can be happy and thrive, in and out of the office.

At Couchbase, you’ll get:
* A fantastic culture
* A focused, energetic team with aligned goals
* True collaboration with everyone playing their positions
* Great market opportunity and growth potential
* Time off when you need it.
* Regular team lunches and fully-stocked kitchens.
* Open, collaborative spaces.
* Competitive benefits and pre-tax commuter perks

Whether you’re a new grad or a proven expert, you’ll have the opportunity to learn new skills, grow your career, and work with the smartest, most passionate people in the industry.

Revolutionizing an industry requires a top-notch team. Become a part of ours today. Bring your big ideas and we'll take on the next great challenge together.


Want to learn more? Check out our blog: https://blog.couchbase.com/

Couchbase is proud to be an equal opportunity workplace. Individuals seeking employment at Couchbase are considered without regards to age, ancestry, color, gender (including pregnancy, childbirth, or related medical conditions), gender identity or expression, genetic information, marital status, medical condition, mental or physical disability, national origin, protected family care or medical leave status, race, religion (including beliefs and practices or the absence thereof), sexual orientation, military or veteran status, or any other characteristic protected by federal, state, or local laws.
Enterprise Account Executive - Switzerland
executive c saas big data Jul 22
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Coatue Management, Altimeter Capital and Franklin Templeton to join earlier investors Sequoia Capital, Benchmark, and Index Ventures in the recent Series E financing of a combined $250 million at a $4.5B valuation. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!

Enterprise Account Executives play a key role in driving Confluent’s sales activities in region. This role includes developing and executing the go-to-market strategy for your territory. The ideal candidate needs to have experience selling complex Database, Messaging, Big Data, Open Source and/or SaaS into large corporate and multinational companies.

This new role will cover the German-speaking region, so you will need to be based in the Zurich area.

What you will do:

  • Build awareness for Kafka and the Confluent Platform within large enterprises
  • Aggressively prospect, identify, qualify and develop sales pipeline
  • Close business to exceed monthly, quarterly and annual bookings objectives
  • Build strong and effective relationships, resulting in growth opportunities
  • Build and maintain relationships with new and existing Confluent partners

What we are looking for:

  • An ability to articulate and sell the business value of big data and the impact on businesses of all sizes
  • Deep experience selling within the Database, Open Source, Messaging or Big Data space
  • 5+ years experience selling enterprise technology in a fast-paced and competitive market
  • Experience selling to developers and C-level executives
  • Great knowledge and network in the Swiss market
  • Highly motivated, over achiever, team player
  • Strong analytical and writing abilities
  • Exceptional presentation skills
  • Entrepreneurial spirit/mindset, flexibility toward dynamic change
  • Goal oriented, with a track record of overachievement (President’s Club, Rep of the Year, etc.)

Why you will enjoy working here:

  • We’re solving hard problems that are relevant in every industry
  • Your growth is important to us, we want you to thrive here
  • You will be challenged on a daily basis
  • We’re a company that truly values a #oneteam mindset
  • We have great benefits to support you AND your family
Culture is a huge part of Confluent, we’re searching for the best people who not only excel at their role, but also contribute to the health, happiness and growth of the company. Inclusivity and openness are important traits, with regular company wide and team events. Here are some of the personal qualities we’re looking for: 

Smart, humble and empathetic
Hard working, you get things done
Hungry to learn in a field which is ever evolving
Adaptable to the myriad of challenges each day can present
Inquisitive and not afraid to ask all the questions, no matter how basic
Ready to roll up your sleeves and help others, getting involved in projects where you feel you can add value
Strive for excellence in your work, your team and the company 

Come and build with us. We are one of the fastest growing software companies in the market. A company built on the tenets of transparency, direct communication and inclusivity. Come meet the streams dream team and have a direct impact on how we shape Confluent.

#LI-SO1
#LI-Remote

Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.

Click here to review our California Candidate Privacy Notice, which describes how and when Confluent, Inc., and its group companies, collects, uses, and shares certain personal information of California job applicants and prospective employees.
Senior Data Engineer II
Auth0  
senior python big data aws Jul 16
Auth0 is a pre-IPO unicorn. We are growing rapidly and looking for exceptional new team members who will help take us to the next level. One team, one score.

We never compromise on identity. You should never compromise yours either. We want you to bring your whole self to Auth0. If you’re passionate, practice radical transparency to build trust and respect, and thrive when you’re collaborating, experimenting and learning – this may be your ideal work environment.  We are looking for team members that want to help us build upon what we have accomplished so far and make it better every day.  N+1 > N.

The Data Engineer will help build, scale, and maintain the entire data platform. The ideal candidate will have a deep technical understanding and hands-on experience in distributed computing, big data, ETL, dimensional modeling, columnar databases, and data visualization. The candidate should feed on challenges and love to be hands-on with recent technologies. This job plays a key role in data infrastructure, analytics projects, and systems design and development. You should be passionate about continuous learning, experimenting, applying, and contributing to cutting-edge open source data technologies and software paradigms.

What you will do:

  • Contributing at a senior-level to the data platform design by implementing a solid, robust, extensible design that supports key business flows.
  • Performing all of the necessary data transformations to populate data lake.
  • Establishing efficient design and programming patterns that beat SLAs and help easily manage the data platform.
  • Designing, integrating and documenting technical components for seamless data extraction and analysis.
  • Adopting best practices in our data systems and sharing them across teams.
  • Contributing to innovations and data insights that fuel Auth0’s mission.
  • Working in a team environment, interacting with multiple groups on a daily basis (very strong communication skills).
  • BA/BS in Computer Science, a related technical field, or equivalent practical experience.
  • At least 3 years of relevant work experience.
  • Ability to write, analyze, and debug SQL queries.
  • Exceptional problem-solving and analytical skills.
  • Experience with Data Warehouse design, ETL (Extraction, Transformation & Load), and architecting efficient software designs for DW platforms.
  • Hands-on experience with Python, R, and Apache Spark in production environments (a minimal PySpark sketch follows this list).
  • Strong skills in Apache Airflow, Luigi or similar tools.
  • Experience in Tableau, Apache SuperSet, Looker or similar BI tools.
  • Knowledge of AWS Redshift, Snowflake or similar databases
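
A minimal sketch of the kind of Spark transformation this role involves, writing a daily rollup into a data lake; the bucket paths, column names, and job name are illustrative assumptions.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_usage_rollup").getOrCreate()

# Hypothetical raw event data landed in object storage
events = spark.read.json("s3://example-bucket/raw/events/")

daily = (
    events
    .withColumn("day", F.to_date("timestamp"))
    .groupBy("day", "tenant_id")
    .agg(
        F.count("*").alias("events"),
        F.countDistinct("user_id").alias("users"),
    )
)

# Populate a partitioned data lake table
daily.write.mode("overwrite").partitionBy("day").parquet(
    "s3://example-bucket/lake/daily_usage/"
)
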
Auth0’s mission is to help developers innovate faster. Every company is becoming a software company and developers are at the center of this shift. They need better tools and building blocks so they can stay focused on innovating. One of these building blocks is identity: authentication and authorization. That’s what we do. Our platform handles 2.5B logins per month for thousands of customers around the world. From indie makers to Fortune 500 companies, we can handle any use case.

We like to think that we are helping make the internet safer.  We have raised $210M to date and are growing quickly. Our team is spread across more than 35 countries and we are proud to continually be recognized as a great place to work. Culture is critical to us, and we are transparent about our vision and principles

Join us on this journey to make developers more productive while making the internet safer!
Senior Software Engineer - Go
 
senior golang python c embedded saas Jul 16
The Senior Software Engineer - Golang is a backend engineer responsible for building and managing cloud-native applications. This role will have the primary accountability of developing, implementing and operating Couchbase’s Cloud platforms. The team operates with a “run what you write” philosophy and each engineer is responsible for deploying and operating the code they write.

A successful candidate must have demonstrable experience in at least one programming language and previous work in SaaS application development and operations. The ideal candidate will also have prior experience developing applications on any of the three major cloud platforms: AWS, Azure, and GCP.

This role is also open to remote work within UK as our teams are distributed all over the world!

Responsibilities

  • Design, build, manage and operate the infrastructure and configuration of SaaS applications with a focus on automation and infrastructure as code.
  • Design, build, manage and operate the infrastructure as a service layer (hosted and cloud-based platforms) that supports the different platform services.
  • Develop comprehensive monitoring solutions to provide full visibility to the different platform components using tools and services like Kubernetes, Prometheus, Grafana, ELK, Datadog, New Relic and other similar tools.
  • Experience working within an Agile/Scrum SDLC
  • Interface with Product manager and Product owner to refine requirements and translate requirements to stories and epics. 
  • Perform detailed scoping of features 
  • Integrate different components and develop new services with a focus on open source to allow a minimal friction developer interaction with the platform and application services.
  • Identify and troubleshoot any availability and performance issues at multiple layers of deployment, from hardware, operating environment, network, and application.
  • Evaluate performance trends and expected changes in demand and capacity, and establish the appropriate scalability plans
  • Troubleshoot and solve customer issues on production deployments
  • Ensure that SLAs are met in executing operational tasks

Qualifications

  • Experience in languages and infrastructure such as Go, Python, C, C++, and scripting is mandatory
  • Experience in modern software paradigms including cloud applications and serverless architectures 
  • 7+ years writing production back-end/embedded systems code.
  • Experience with agile methodologies and ability to lead scrums 
  • Some experience with front-end frameworks such as React, Angular, and Bootstrap is a solid plus.
  • Team lead experience. Experience working with global and remote teams 
  • Experience with a pair programming model highly desirable
  • Experience in full-stack development 
  • Cloud Infrastructure: Amazon Web Services, Google Cloud Platform & Azure 
  • MS in Computer Science or equivalent experience
About Couchbase

Couchbase's mission is to be the platform that accelerates application innovation. To make this possible, Couchbase created an enterprise-class, multi-cloud NoSQL database architected on top of an open source foundation. Couchbase is the only database that combines the best of NoSQL with the power and familiarity of SQL, all in a single, elegant platform spanning from any cloud to the edge.  
 
Couchbase has become pervasive in our everyday lives; our customers include industry leaders Amadeus, AT&T, BD (Becton, Dickinson and Company), Carrefour, Comcast, Disney, DreamWorks Animation, eBay, Marriott, Neiman Marcus, Tesco, Tommy Hilfiger, United, Verizon, Wells Fargo, as well as hundreds of other household names.

Couchbase has offices around the globe, and we’re committed to a work environment where you can be happy and thrive, in and out of the office.

At Couchbase, you’ll get:
* A fantastic culture
* A focused, energetic team with aligned goals
* True collaboration with everyone playing their positions
* Great market opportunity and growth potential
* Time off when you need it.
* Regular team lunches and fully-stocked kitchens.
* Open, collaborative spaces.
* Competitive benefits

Whether you’re a new grad or a proven expert, you’ll have the opportunity to learn new skills, grow your career, and work with the smartest, most passionate people in the industry.

Revolutionizing an industry requires a top-notch team. Become a part of ours today. Bring your big ideas and we'll take on the next great challenge together.


Want to learn more? Check out our blog: https://blog.couchbase.com/

Equal Opportunity Statement:
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, or disability.
Solutions Architect - Toronto
java python scala big data linux cloud Jul 15
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Coatue Management, Altimeter Capital and Franklin Templeton to join earlier investors Sequoia Capital, Benchmark, and Index Ventures in the recent Series E financing of a combined $250 million at a $4.5B valuation. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!

We are looking for a Solutions Architect to join our Customer Success team. As a Solutions Architect (SA), you will help customers leverage streaming architectures and applications to achieve their business results. In this role, you will interact directly with customers to provide software architecture, design, and operations expertise that leverages your deep knowledge of and experience in Apache Kafka, the Confluent platform, and complementary systems such as Hadoop, Spark, Storm, relational and NoSQL databases. You will develop and advocate best practices, gather and validate critical product feedback, and help customers overcome their operational challenges.
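
Since deep Kafka knowledge is the core of this role, here is a minimal produce-and-consume sketch using the confluent-kafka Python client; the broker address, topic name, consumer group, and payload are placeholders.

from confluent_kafka import Consumer, Producer

BROKER = {"bootstrap.servers": "localhost:9092"}  # placeholder broker

# Produce a single event to a hypothetical "orders" topic
producer = Producer(BROKER)
producer.produce("orders", key="order-1", value='{"total": 42.0}')
producer.flush()

# Consume it back from the beginning of the topic
consumer = Consumer({**BROKER, "group.id": "demo-group", "auto.offset.reset": "earliest"})
consumer.subscribe(["orders"])
msg = consumer.poll(10.0)
if msg is not None and msg.error() is None:
    print(msg.key(), msg.value())
consumer.close()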

Throughout all these interactions, you will build a strong relationship with your customer in a very short space of time, ensuring exemplary delivery standards. You will also have the opportunity to help customers build state-of-the-art streaming data infrastructure, in partnership with colleagues who are widely recognized as industry leaders, as well as optimizing and debugging customers existing deployments.

Location:
Toronto with 60-75% travel expected.

Responsibilities

  • Helping a customer determine his/her platform and/or application strategy for moving to a more real-time, event-based business. Such engagements often involve remote preparation; presenting an onsite or remote workshop for the customer’s architects, developers, and operations teams; investigating (with Engineering and other coworkers) solutions to difficult challenges; and writing a recommendations summary doc.
  • Providing feedback to the Confluent Product and Engineering groups
  • Building tooling for another team or the wider company to help us push our technical boundaries and improve our ability to deliver consistently with high quality
  • Testing performance and functionality of new components developed by Engineering
  • Writing or editing documentation and knowledge base articles, including reference architecture materials and design patterns based on customer experiences
  • Honing your skills, building applications, or trying out new product features
  • Participating in community and industry events

Requirements

  • Deep experience designing, building, and operating in-production Big Data, stream processing, and/or enterprise data integration solutions, ideally using Apache Kafka
  • Demonstrated experience successfully managing multiple B2B infrastructure software development projects, including driving expansion, customer satisfaction, feature adoption, and retention
  • Experience operating Linux (configure, tune, and troubleshoot both RedHat and Debian-based distributions)
  • Experience using cloud providers (Amazon Web Services, Google Cloud, Microsoft Azure) for running high-throughput systems
  • Experience with Java Virtual Machine (JVM) tuning and troubleshooting
  • Experience with distributed systems (Kafka, Hadoop, Cassandra, etc.)
  • Proficiency in Java
  • Strong desire to tackle hard technical problems, and proven ability to do so with little or no direct daily supervision
  • Excellent communication skills, with an ability to clearly and concisely explain tricky issues and complex solutions
  • Ability to quickly learn new technologies
  • Ability and willingness to travel up to 50% of the time to meet with customers

Bonus Points

  • Experience helping customers build Apache Kafka solutions alongside Hadoop technologies, relational and NoSQL databases, message queues, and related products
  • Experience with Scala, Python, or Go
  • Experience working with a commercial team and demonstrated business acumen
  • Experience working in a fast-paced technology start-up
  • Experience managing projects, using any known methodology to scope, manage, and deliver on plan no matter the complexity
  • Bachelor-level degree in computer science, engineering, mathematics, or another quantitative field


Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.

Click here to review our California Candidate Privacy Notice, which describes how and when Confluent, Inc., and its group companies, collects, uses, and shares certain personal information of California job applicants and prospective employees.
Integration Architect
big data docker Jul 15
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Coatue Management, Altimeter Capital and Franklin Templeton to join earlier investors Sequoia Capital, Benchmark, and Index Ventures in the recent Series E financing of a combined $250 million at a $4.5B valuation. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!

The mission of an Integration Architect is to develop technical content that teaches developers and operators how to build event streaming applications based on Apache Kafka. The goal is to increase adoption by demonstrating first-class user experience with end-to-end solutions.

As part of the Developer Relations team at Confluent, an Integration Architect designs, develops, and validates solutions, with an emphasis on content for developers new to Kafka, like basic event streaming concepts, as well as more advanced topics for users going to production with Confluent. These very technical and “how to” artifacts include:

  • Hands-on examples, tutorials, and code showcasing event streaming applications
  • Educational materials: short-form curriculum
  • Writeups: whitepapers, blog posts, validated reference architectures, documentation

In addition to developing technical content, an Integration Architect is also responsible for providing feedback to our Engineering and Product Management teams by writing friction logs, filing software bugs, identifying feature gaps, and proposing usability enhancements.

What we’re looking for:

  • Experience developing stream processing applications and operating data systems
  • Enjoyment in creating technical, user-facing documentation, white papers, or training material
  • Experience in various programming languages, automation tooling, and Linux
  • Sharp focus on user experience
  • Proven ability to set own priorities, work cross-functionally, investigate technical problems, and figure out new things, with little or no direct daily supervision
  • Travel: 1-2 times per year to HQ and/or conferences

What gives you an edge:

  • Previous experience building solutions that use Apache Kafka, related stream processing frameworks, or message queues
  • Previous experience working with container technologies like Docker or Kubernetes
  • Java software development
  • Databases, big data tools
#LI-MT1

Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.

Click here to review our California Candidate Privacy Notice, which describes how and when Confluent, Inc., and its group companies, collects, uses, and shares certain personal information of California job applicants and prospective employees.

This Year

Technical Writer Developer Documentation
 
java python javascript big data cloud dot net Jul 09
At Couchbase, big things happen. Every day, we’re translating vision into reality by tackling new and exciting challenges head-on. This is a breakthrough stage in our company, where the enthusiasm of our employees and leadership team is infectious and growing.  You’ll have the opportunity to learn new skills, grow your career, and work with the smartest, most passionate people in the industry.

Documentation at Couchbase is a crucial component of the product experience. Couchbase documentation enables developers and system administrators to effectively use Couchbase in their environment.
 
As a Technical Writer for Developer documentation, you are passionate about explaining complex technologies in useful ways for the target audience. You are naturally curious about software and technology, and you learn complex technologies relatively quickly. You have an engineering background and mindset that will help you stand in the target users’ (developers’) shoes. You have strong interpersonal skills and work well with a global team of technical writers and stakeholders (product managers, engineers, quality engineers, and support engineers) on a daily basis.

Responsibilities

  • Create new content and update existing developer documentation content.
  • Add examples and tutorials to demonstrate how to use various product features effectively, ensuring that the samples take best practices into account. 
  • Engage with developers to gather feedback on developer content, and iterate quickly and frequently to keep the content relevant and up-to-date.  
  • Write clearly in English, using a simple, conversational tone. 

Preferred Qualifications

  • Engineering background: you have been an engineer, developer, developer advocate, or in a similar role in the past
  • Ability to learn complex technologies relatively quickly
  • Ability to code in at least one of Java, Node.js, .NET, Python, or JavaScript

Minimum Qualifications

  • Bachelor's degree or equivalent practical experience.
  • Experience in technical writing, product documentation, or online publishing.
  • Understand code, and be familiar with at least one of Java, Node.js, .NET, Python and JavaScript languages.
  • Experience creating, modifying, and executing code samples.
  • Experience with text-based authoring with Asciidoc or Markdown, docs-as-code philosophy, and collaborative authoring practices.
About Couchbase

Couchbase's mission is to be the platform that accelerates application innovation. To make this possible, Couchbase created an enterprise-class, multi-cloud NoSQL database architected on top of an open source foundation. Couchbase is the only database that combines the best of NoSQL with the power and familiarity of SQL, all in a single, elegant platform spanning from any cloud to the edge.  
 
Couchbase has become pervasive in our everyday lives; our customers include industry leaders Amadeus, AT&T, BD (Becton, Dickinson and Company), Carrefour, Comcast, Disney, DreamWorks Animation, eBay, Marriott, Neiman Marcus, Tesco, Tommy Hilfiger, United, Verizon, Wells Fargo, as well as hundreds of other household names.

Couchbase’s HQ is conveniently located in Santa Clara, CA with additional offices throughout the globe. We’re committed to a work environment where you can be happy and thrive, in and out of the office.

At Couchbase, you’ll get:
* A fantastic culture
* A focused, energetic team with aligned goals
* True collaboration with everyone playing their positions
* Great market opportunity and growth potential
* Time off when you need it.
* Regular team lunches and fully-stocked kitchens.
* Open, collaborative spaces.
* Competitive benefits and pre-tax commuter perks

Whether you’re a new grad or a proven expert, you’ll have the opportunity to learn new skills, grow your career, and work with the smartest, most passionate people in the industry.

Revolutionizing an industry requires a top-notch team. Become a part of ours today. Bring your big ideas and we'll take on the next great challenge together.

Want to learn more? Check out our blog: https://blog.couchbase.com/

Couchbase is proud to be an equal opportunity workplace. Individuals seeking employment at Couchbase are considered without regard to age, ancestry, color, gender (including pregnancy, childbirth, or related medical conditions), gender identity or expression, genetic information, marital status, medical condition, mental or physical disability, national origin, protected family care or medical leave status, race, religion (including beliefs and practices or the absence thereof), sexual orientation, military or veteran status, or any other characteristic protected by federal, state, or local laws.
Cloud Support Engineer
Couchbase
cloud big data linux Jul 07
At Couchbase, big things happen. Every day, we’re translating vision into reality by tackling new and exciting challenges head-on. This is a breakthrough stage in our company, where the enthusiasm of our employees and leadership team is infectious and growing.  You’ll have the opportunity to learn new skills, grow your career, and work with the smartest, most passionate people in the industry.

As part of the Cloud support team, you will directly respond to questions from our customers on areas such as connectivity and the availability of Couchbase Cloud, along with questions on Couchbase and Couchbase Cloud features. You will work closely with our product and engineering teams to escalate complex problems as appropriate. Responding promptly and communicating proactively, so that customers stay informed and get answers quickly, is crucial.

Responsibilities

  • Utilize cloud and Couchbase expertise in a customer-first mindset to help our customers throughout the world be successful
  • Have passion for technology and collaboration
  • Solve complex distributed systems and cloud challenges of today’s cloud-native database infrastructure
  • Take part in a global cloud operations team spanning the US, UK and India
  • Handle customer support questions of various levels of difficulty utilizing live text chat with technical or non-technical customers
  • Genuinely enjoy helping others and be able to interpret their requests
  • Communicate well, both in writing and verbally, and be knowledgeable about a wide range of cloud and open source technologies

Qualifications

  • 5+ years of experience in providing cloud support and troubleshooting database problems
  • Experience administering large-scale production environments, including IaaS and PaaS, operating systems (e.g. Linux, Windows), and database software
  • Degree in Tech/Computer Science or equivalent work experience
  • Strong operating knowledge and troubleshooting skills related to cloud platforms, operating systems (Linux, Unix, Windows), networking, databases and security
  • Basic familiarity with Amazon Web Services or other cloud infrastructure platforms is required
  • Couchbase skills are highly desired
  • Excellent written and verbal communication skills, as well as the soft skills needed to troubleshoot issues with customers
About Couchbase

Couchbase's mission is to be the platform that accelerates application innovation. To make this possible, Couchbase created an enterprise-class, multi-cloud NoSQL database architected on top of an open source foundation. Couchbase is the only database that combines the best of NoSQL with the power and familiarity of SQL, all in a single, elegant platform spanning from any cloud to the edge.  
 
Couchbase has become pervasive in our everyday lives; our customers include industry leaders Amadeus, AT&T, BD (Becton, Dickinson and Company), Carrefour, Comcast, Disney, DreamWorks Animation, eBay, Marriott, Neiman Marcus, Tesco, Tommy Hilfiger, United, Verizon, Wells Fargo, as well as hundreds of other household names.

Couchbase’s HQ is conveniently located in Santa Clara, CA with additional offices throughout the globe. We’re committed to a work environment where you can be happy and thrive, in and out of the office.

At Couchbase, you’ll get:
* A fantastic culture
* A focused, energetic team with aligned goals
* True collaboration with everyone playing their positions
* Great market opportunity and growth potential
* Time off when you need it
* Regular team lunches and fully-stocked kitchens
* Open, collaborative spaces
* Competitive benefits and pre-tax commuter perks

Whether you’re a new grad or a proven expert, you’ll have the opportunity to learn new skills, grow your career, and work with the smartest, most passionate people in the industry.

Revolutionizing an industry requires a top-notch team. Become a part of ours today. Bring your big ideas and we'll take on the next great challenge together.

Want to learn more? Check out our blog: https://blog.couchbase.com/

Couchbase is proud to be an equal opportunity workplace. Individuals seeking employment at Couchbase are considered without regard to age, ancestry, color, gender (including pregnancy, childbirth, or related medical conditions), gender identity or expression, genetic information, marital status, medical condition, mental or physical disability, national origin, protected family care or medical leave status, race, religion (including beliefs and practices or the absence thereof), sexual orientation, military or veteran status, or any other characteristic protected by federal, state, or local laws.
Solutions Architect - West Coast
java python scala big data linux cloud Jul 01
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Coatue Management, Altimeter Capital and Franklin Templeton to join earlier investors Sequoia Capital, Benchmark, and Index Ventures in the recent Series E financing of a combined $250 million at a $4.5B valuation. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!

We are looking for a Solutions Architect to join our Customer Success team. As a Solutions Architect (SA), you will help customers leverage streaming architectures and applications to achieve their business results. In this role, you will interact directly with customers to provide software architecture, design, and operations expertise that leverages your deep knowledge of and experience in Apache Kafka, the Confluent platform, and complementary systems such as Hadoop, Spark, Storm, relational and NoSQL databases. You will develop and advocate best practices, gather and validate critical product feedback, and help customers overcome their operational challenges.

Throughout all these interactions, you will build a strong relationship with your customer in a very short space of time, ensuring exemplary delivery standards. You will also have the opportunity to help customers build state-of-the-art streaming data infrastructure, in partnership with colleagues who are widely recognized as industry leaders, as well as optimizing and debugging customers existing deployments.
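
As a hedged illustration of the event streaming work described above (not part of the job description itself), producing a keyed event to Kafka can be as small as the following sketch. It assumes the confluent-kafka Python client and a broker on localhost; the topic and payload are invented:

    # Minimal event producer sketch using the confluent-kafka Python client
    # (pip install confluent-kafka). Broker address and topic are assumptions.
    from confluent_kafka import Producer

    producer = Producer({"bootstrap.servers": "localhost:9092"})

    def on_delivery(err, msg):
        # Called once per message when the broker acknowledges (or rejects) it.
        if err is not None:
            print(f"Delivery failed: {err}")
        else:
            print(f"Delivered to {msg.topic()} [{msg.partition()}]")

    # Each business event becomes a keyed record on a topic.
    producer.produce("orders", key="order-123", value='{"status": "created"}',
                     on_delivery=on_delivery)
    producer.flush()  # Block until all outstanding messages are delivered.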

Location:
Anywhere on the West Coast, USA, with 60-75% travel expected.

Responsibilities

  • Helping a customer determine their platform and/or application strategy for moving to a more real-time, event-based business. Such engagements often involve remote preparation; presenting an onsite or remote workshop for the customer’s architects, developers, and operations teams; investigating (with Engineering and other coworkers) solutions to difficult challenges; and writing a recommendations summary doc.
  • Providing feedback to the Confluent Product and Engineering groups
  • Building tooling for another team or the wider company to help us push our technical boundaries and improve our ability to deliver consistently with high quality
  • Testing performance and functionality of new components developed by Engineering
  • Writing or editing documentation and knowledge base articles, including reference architecture materials and design patterns based on customer experiences
  • Honing your skills, building applications, or trying out new product features
  • Participating in community and industry events

Requirements

  • Deep experience designing, building, and operating in-production Big Data, stream processing, and/or enterprise data integration solutions, ideally using Apache Kafka
  • Demonstrated experience successfully managing multiple B2B infrastructure software development projects, including driving expansion, customer satisfaction, feature adoption, and retention
  • Experience operating Linux (configure, tune, and troubleshoot both Red Hat and Debian-based distributions)
  • Experience using cloud providers (Amazon Web Services, Google Cloud, Microsoft Azure) for running high-throughput systems
  • Experience with Java Virtual Machine (JVM) tuning and troubleshooting
  • Experience with distributed systems (Kafka, Hadoop, Cassandra, etc.)
  • Proficiency in Java
  • Strong desire to tackle hard technical problems, and proven ability to do so with little or no direct daily supervision
  • Excellent communication skills, with an ability to clearly and concisely explain tricky issues and complex solutions
  • Ability to quickly learn new technologies
  • Ability and willingness to travel up to 50% of the time to meet with customers

Bonus Points

  • Experience helping customers build Apache Kafka solutions alongside Hadoop technologies, relational and NoSQL databases, message queues, and related products
  • Experience with Scala, Python, or Go
  • Experience working with a commercial team and demonstrated business acumen
  • Experience working in a fast-paced technology start-up
  • Experience managing projects, using any known methodology to scope, manage, and deliver on plan no matter the complexity
  • Bachelor-level degree in computer science, engineering, mathematics, or another quantitative field


Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.

Click here to review our California Candidate Privacy Notice, which describes how and when Confluent, Inc., and its group companies, collects, uses, and shares certain personal information of California job applicants and prospective employees.
SRE for Managed AI Cloud Platform
Dataiku  
cloud saas big data docker Jun 08
Dataiku’s mission is big: to enable all people throughout companies around the world to use data by removing friction surrounding data access, cleaning, modeling, deployment, and more. But it’s not just about technology and processes; at Dataiku, we also believe that people (including our people!) are a critical piece of the equation.



Dataiku is looking for an experienced Site Reliability Engineer (SRE) to join its SaaS team developing and operating the Dataiku managed offering. 

The role consists of working on a large variety of tasks split between operations and development. As an SRE, you are responsible for building, automating, deploying, and operating a reliable, secure, and cost-efficient infrastructure to support the Dataiku SaaS offering.

This role is an opportunity to be an early member of a small team launching an exciting new project, with a strong and direct impact on the final outcome.  In this role, you will get your hands on the most promising cloud technologies and receive valuable mentorship from experts from our core team.
 
The position is either remote or at the company's Paris office (Gare de Lyon).

Responsibilities:

  • Design, deploy and maintain a cloud infrastructure to support a Dataiku SaaS offering 
  • Continuously improve the infrastructure, deployment and configuration to deliver more reliable, resilient, scalable and secure services
  • Automate all technical operations as much as possible
  • Troubleshoot cloud infrastructure, systems, network, and application stacks
  • Set up monitoring, logging and tracing tools to detect and fix potential issues (a minimal probe sketch follows this list)
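
The minimal probe sketch promised above: it uses the official Kubernetes Python client to flag pods that are not running, assumes cluster access via a local kubeconfig, and is illustrative rather than a prescribed tool:

    # Sketch: list pods that are not in the Running phase, using the official
    # Kubernetes Python client (pip install kubernetes). Assumes ~/.kube/config.
    from kubernetes import client, config

    config.load_kube_config()  # Inside a cluster, use config.load_incluster_config().
    v1 = client.CoreV1Api()

    for pod in v1.list_pod_for_all_namespaces().items:
        if pod.status.phase != "Running":
            print(f"{pod.metadata.namespace}/{pod.metadata.name}: {pod.status.phase}")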

Requirements:

  • Hands-on expertise working with Docker and Kubernetes
  • Strong experience leveraging cloud resources from different providers
  • Hands-on experience with Infrastructure as Code tools (Terraform, Ansible, Helm)
  • Knowledge of distributed systems like Hadoop and Spark
  • Solution-oriented and automation-first mindset

Bonus points for any of these:

  • Experience working as an SRE for a SaaS company
  • Knowledge of data science, AI and machine learning

Benefits:

  • Equity
  • Attending and presenting at online big data conferences
  • Online yoga classes and after-work events
  • Possibility to work from home or come to our Paris office
  • Free food and drinks in the office
To fulfill its mission, Dataiku is growing fast! In 2019, we achieved unicorn status, went from 200 to 400 people and opened new offices across the globe. We now serve our global customer base from our headquarters in New York City as well as offices in Paris, London, Munich, Amsterdam, Denver, Los Angeles, Singapore, Sydney and Dubaï. Each of them has a unique culture, but underpinning local nuances, we always value curiosity, collaboration, and can-do attitudes!
Senior Data Architect
etl design database graphdb neo4j senior Jun 06

Lovevery is a fast-growing digitally native brand co-founded by successful serial entrepreneurs and based in Boise, Idaho. Our customers are parents, and our mission is to help them feel confident they are giving their children meaningful development experiences in the critical early years of life.

Our customers have been asking us to deliver a digital product experience that is as meaningful as the physical product experience they already get from Lovevery. We are looking for an experienced Data Architect with graph database experience who is passionate about building systems to efficiently store and retrieve large amounts of dynamic data, as well as optimizing our existing ETL and data processes.

The following attributes in a candidate are paramount:

  • Architectural (Data) Vision – Lovevery’s co-founders and investors have big plans for how the brand can help families digitally, and it is up to you to architect and deliver the data pipeline and storage elements of the platform to achieve the company’s vision. You will do it all in a way that lays the groundwork for a scalable offering of digital products to create an extraordinary and unmatched experience for our users.
  • Pragmatism – You will be building new digital products in this role, and you will also be rearchitecting eCommerce, data and analytics platforms, stitching together solutions to create the best experience in the most pragmatic way. You must have the ability to look beyond the current technology stack to get things done.
  • Humble Hustle - This role requires both vision and the ability to execute. It is not just designing systems but also being a part of the implementation team.
  • Quality-Focused - You will be a champion for building things in a robust and scalable way and developing a test suite to prevent regressions that will impact customers' experience with Lovevery.
  • Business-Minded - You will define the ideal technical solution and at the same time articulate the building blocks and roadmap to that solution while working with the broader digital product team to articulate business trade-offs for each roadmap item.
  • Strong collaboration - We’ve built a world-class team with passionate individuals who work well together and we only want to make the team even better. You must be able to communicate, collaborate, and work well with others from all different functions within the organization.

We are obsessed with giving families the best products and content for early childhood, and this is an important role in making that happen at Lovevery. This role will report directly to the Senior Director of Digital Products and will grow with the company as the team scales. This is a strategic position for us, and we are only interested in top candidates who can grow with the business and can serve as a technical leader as the company grows.

If you are an experienced Data Architect with graph database experience and you have a passion for early childhood, you could be amazing in this role and be part of something special.


Accountabilities:

  • Design, architect, refine, optimize and deliver a cloud-based data architecture to efficiently store and retrieve large amounts of dynamic data as well as optimize our existing ETL and data processes. The data platform architecture is a critical aspect of the entire digital team’s delivery of the product vision, roadmap, strategy and analytics.
  • Work with the analytics team and business stakeholders to understand data requirements and current use cases, with a continual drive towards providing clearer and more accurate and actionable data.
  • Recommend solutions to improve new and existing database systems as well as make technical recommendations on make vs buy decisions.
  • Mentor and pair with mid-level and junior software engineers in order to develop their skills and understanding of the data architecture, platform, code, programming languages and frameworks employed within the Lovevery platform.
  • Lead efforts to migrate data from legacy systems to new solutions.
  • Design conceptual and logical data models and flowcharts.
  • Improve system performance by conducting tests, troubleshooting, and integrating new elements.
  • Optimize new and current database systems.
  • Define security and backup procedures.
  • Design and implement data storage and removal procedures to meet GDPR, CCPA and other data privacy requirements.
  • Work closely with and collaborate cross-functionally with software engineers, designers, QA, Product Managers and Business functions to build, evolve and optimize needed systems and platforms.

Qualifications and attributes:

  • You think parenthood and early childhood are really inspiring things to work on and you have empathy for parents as users of digital products.
  • You have a proven track record as a top performer for 5+ years in a similar role, preferably in a consumer internet business, digital community business, or consumer products business with a digital experience.
  • You are interested in and have experience with cloud-based data processing and storage technologies.
  • You have experience working with mobile app clients.
  • You have experience with eCommerce data analytics and ensuring clean eCommerce data for analysts out of solutions such as Shopify or Google Analytics.
  • You have experience with cloud big data tools and platforms such as AWS Redshift, Athena, data lakes, ClickHouse, and Snowflake.
  • You have built, extended, and automated ETL data pipelines.
  • You have experience with Google Analytics 360 and BigQuery.
  • You have a Bachelor’s degree in Computer Science or a related field.
  • You have 2+ years of experience with graph databases (Neo4j experience would be advantageous; see the sketch after this list).
  • You have experience working with Kafka or another stream processing platform.
  • You have excellent design, organizational and analytical abilities.
  • You are an outstanding problem solver.
  • Your written and oral communications are clear, concise, and thorough.
  • You have experience coding in Java, Python or Ruby and have a passion for writing code.
  • You have an intrinsic humble hustle.
  • You have a “can do” attitude and come with proposals for solutions rather than just identifying problems.
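
The sketch promised above: a minimal illustration of storing and querying connected data with the official Neo4j Python driver. The Child/Activity model, credentials, and connection details are invented for illustration and are not Lovevery's schema:

    # Sketch using the official Neo4j Python driver (pip install neo4j).
    # Connection details and the Child/Activity model are invented.
    from neo4j import GraphDatabase

    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

    with driver.session() as session:
        # Relate a child to a play activity, creating nodes if they don't exist.
        session.run(
            "MERGE (c:Child {name: $child}) "
            "MERGE (a:Activity {title: $title}) "
            "MERGE (c)-[:COMPLETED]->(a)",
            child="Sam", title="Stacking Cups",
        )
        # Query the activities a child has completed.
        result = session.run(
            "MATCH (c:Child {name: $child})-[:COMPLETED]->(a) RETURN a.title",
            child="Sam",
        )
        print([record["a.title"] for record in result])

    driver.close()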

Compensation:

  • Competitive salary, benefits, and stock options package
Accounts Receivable and Collections Specialist
Couchbase
big data cloud Jun 03
At Couchbase, big things happen. Every day, we’re translating vision into reality by tackling new and exciting challenges head-on. This is a breakthrough stage in our company, where the enthusiasm of our employees and leadership team is infectious and growing.  You’ll have the opportunity to learn new skills, grow your career, and work with the smartest, most passionate people in the industry.

We are looking for a full-time Accounts Receivable and Collections Specialist to work overnight in Bangalore during US standard business hours. The position will be remote, based out of your home office and reporting to the Order to Cash Lead. In this role, you will help manage the accounts receivable process, which includes generating invoices, keeping track of payments received and following up with Couchbase’s customers as necessary. Other responsibilities will include helping with day-to-day revenue transactions (such as generation of sales orders), providing suggestions for process improvements, working cross-functionally with the Sales Operations and General Ledger Accounting teams, and providing PBC (prepared-by-client) documentation to support external audits.

Responsibilities

  • Prepare, verify, and process sales orders & invoices using NetSuite/Salesforce
  • Maintain up-to-date billing system through timely generation of invoices from NetSuite and sending over to the customers
  • Manage cash collection/application for multiple currency accounts
  • Monitor customer account details and appropriate follow-up for non-payments, delayed payments, and other irregularities via email and phone
  • Review customer contracts to ensure they adhere to company policies
  • Review open accounts for collection efforts
  • Make outbound collection calls in a professional manner while maintaining and improving customer relations
  • Resolve client billing problems and reduce accounts receivable delinquency
  • Generate and distribute monthly account statements to customers
  • Connect with customers to arrange payment and/or resolve issues preventing payment
  • Manage day-to-day collections in a high-volume transaction environment.
  • Enforce payment terms and drive improvement in the Company’s Days Sales Outstanding (DSO)
  • In collaboration with the Sales team, reconcile customer disputes as they pertain to payment of outstanding balances that are due
  • Assist in policy enforcement and process improvements
  • Post customer payments by recording checks, credit card payments and ACH/Wire transfers in compliance with financial procedures and policies
  • Vendor set up for new customers (Supplier forms, banking info, W9)
  • Support monthly, quarterly and annual accounting close responsibilities
  • Ad-hoc special projects, and other requests as needed
  • Monitor vendor accounts for credit balances (misapplied cash and credit memos) and resolve by conducting thorough research and processing through to appropriate resolution

Preferred Qualifications

  • Previous experience with a software company
  • Understanding of software revenue recognition accounting
  • Hands-on experience with the MS Office suite, and previous experience working on NetSuite, Salesforce, Blackline and/or RevSym
  • Attention to detail and superior organization skills 
  • Ability to function in a fast-paced, team-based environment

Minimum Requirements

  • Must be willing to work from home during standard US business hours
  • Bachelor’s degree in Accounting or Finance
  • Solid understanding of basic accounting principles
  • Must have strong analytical and organizational skills, and strong written and verbal communication skills
  • A minimum of 2 years of proven work experience as an accounts receivable accountant, including past-due account collections experience
  • Strong computer skills including MS Office (particularly Excel) required.
  • Previous cash collection experience
About Couchbase

Couchbase's mission is to be the platform that accelerates application innovation. To make this possible, Couchbase created an enterprise-class, multi-cloud NoSQL database architected on top of an open source foundation. Couchbase is the only database that combines the best of NoSQL with the power and familiarity of SQL, all in a single, elegant platform spanning from any cloud to the edge.  
 
Couchbase has become pervasive in our everyday lives; our customers include industry leaders Amadeus, AT&T, BD (Becton, Dickinson and Company), Carrefour, Comcast, Disney, DreamWorks Animation, eBay, Marriott, Neiman Marcus, Tesco, Tommy Hilfiger, United, Verizon, Wells Fargo, as well as hundreds of other household names.

Couchbase has offices around the globe, and we’re committed to a work environment where you can be happy and thrive, in and out of the office.

At Couchbase, you’ll get:
* A fantastic culture
* A focused, energetic team with aligned goals
* True collaboration with everyone playing their positions
* Great market opportunity and growth potential
* Time off when you need it
* Regular team lunches and fully-stocked kitchens
* Open, collaborative spaces
* Competitive benefits

Whether you’re a new grad or a proven expert, you’ll have the opportunity to learn new skills, grow your career, and work with the smartest, most passionate people in the industry.

Revolutionizing an industry requires a top-notch team. Become a part of ours today. Bring your big ideas and we'll take on the next great challenge together.

Want to learn more? Check out our blog: https://blog.couchbase.com/

Equal Opportunity Statement:
All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, age, or disability.
Associate Solutions Architect
java python scala big data linux cloud Jun 03
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Coatue Management, Altimeter Capital and Franklin Templeton to join earlier investors Sequoia Capital, Benchmark, and Index Ventures in the recent Series E financing of a combined $250 million at a $4.5B valuation. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!

We are looking for a Solutions Architect to join our Customer Success team. As a Solutions Architect (SA), you will help customers leverage streaming architectures and applications to achieve their business results. In this role, you will interact directly with customers to provide software architecture, design, and operations expertise that leverages your deep knowledge of and experience in Apache Kafka, the Confluent platform, and complementary systems such as Hadoop, Spark, Storm, relational and NoSQL databases. You will develop and advocate best practices, gather and validate critical product feedback, and help customers overcome their operational challenges.

Throughout all these interactions, you will build a strong relationship with your customer in a very short space of time, ensuring exemplary delivery standards. You will also have the opportunity to help customers build state-of-the-art streaming data infrastructure, in partnership with colleagues who are widely recognized as industry leaders, as well as optimizing and debugging customers existing deployments.
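
Complementing the producer sketch in the Solutions Architect listing above, the consumer side of a streaming application might look like the following hedged sketch, again assuming the confluent-kafka Python client; the broker, group id, and topic are invented:

    # Minimal consumer sketch with the confluent-kafka Python client.
    # Broker, group id, and topic name are assumptions for illustration.
    from confluent_kafka import Consumer

    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",
        "group.id": "demo-group",
        "auto.offset.reset": "earliest",  # Start from the beginning if no offset.
    })
    consumer.subscribe(["orders"])

    try:
        while True:
            msg = consumer.poll(timeout=1.0)  # Wait up to 1s for a record.
            if msg is None:
                continue
            if msg.error():
                print(f"Consumer error: {msg.error()}")
                continue
            print(f"{msg.key()}: {msg.value().decode('utf-8')}")
    finally:
        consumer.close()  # Commit final offsets and leave the group cleanly.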

Location:
You will be based in the North East, USA (remote), with 60-70% travel expected.

Responsibilities

  • Helping a customer determine their platform and/or application strategy for moving to a more real-time, event-based business. Such engagements often involve remote preparation; presenting an onsite or remote workshop for the customer’s architects, developers, and operations teams; investigating (with Engineering and other coworkers) solutions to difficult challenges; and writing a recommendations summary doc.
  • Providing feedback to the Confluent Product and Engineering groups
  • Building tooling for another team or the wider company to help us push our technical boundaries and improve our ability to deliver consistently with high quality
  • Testing performance and functionality of new components developed by Engineering
  • Writing or editing documentation and knowledge base articles, including reference architecture materials and design patterns based on customer experiences
  • Honing your skills, building applications, or trying out new product features
  • Participating in community and industry events

Requirements

  • Deep experience designing, building, and operating in-production Big Data, stream processing, and/or enterprise data integration solutions, ideally using Apache Kafka
  • Demonstrated experience successfully managing multiple B2B infrastructure software development projects, including driving expansion, customer satisfaction, feature adoption, and retention
  • Experience operating Linux (configure, tune, and troubleshoot both Red Hat and Debian-based distributions)
  • Experience using cloud providers (Amazon Web Services, Google Cloud, Microsoft Azure) for running high-throughput systems
  • Experience with Java Virtual Machine (JVM) tuning and troubleshooting
  • Experience with distributed systems (Kafka, Hadoop, Cassandra, etc.)
  • Proficiency in Java
  • Strong desire to tackle hard technical problems, and proven ability to do so with little or no direct daily supervision
  • Excellent communication skills, with an ability to clearly and concisely explain tricky issues and complex solutions
  • Ability to quickly learn new technologies
  • Ability and willingness to travel up to 50% of the time to meet with customers

Bonus Points

  • Experience helping customers build Apache Kafka solutions alongside Hadoop technologies, relational and NoSQL databases, message queues, and related products
  • Experience with Scala, Python, or Go
  • Experience working with a commercial team and demonstrated business acumen
  • Experience working in a fast-paced technology start-up
  • Experience managing projects, using any known methodology to scope, manage, and deliver on plan no matter the complexity
  • Bachelor-level degree in computer science, engineering, mathematics, or another quantitative field


Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.

Click here to review our California Candidate Privacy Notice, which describes how and when Confluent, Inc., and its group companies, collects, uses, and shares certain personal information of California job applicants and prospective employees.
Senior Partner Solution Architect
Couchbase
senior java python javascript big data ios May 20
We work with the world’s biggest enterprise customers focused on leading a revolution to transform their organizations to take advantage of the digital economy. The list includes Amadeus, Concur, eBay, GE, LinkedIn, and many others. You can learn more here: www.couchbase.com/customers

Are you an individual who is customer focused, innovative, solution oriented and enjoys working with technology partners and global SIs? If so, read on. Couchbase is looking for a talented Senior Partner Solution Architect with expertise in databases, big data, cloud and/or mobile technologies to support our product and partner organization. This position will cover a variety of exciting technologies, including big data, mobile, IoT, containers & orchestration, DevOps and cloud technology ecosystem partners.

Responsibilities

  • Working with partners to create technical integrations and/or end-to-end solutions between their products and Couchbase. Examples include: Red Hat/IBM, Grafana Labs and Prometheus, Informatica, Confluent/Kafka, Databricks/Spark, Elasticsearch, VMware/Pivotal/Spring, and public Cloud providers
  • Assisting our customers to deploy partner integrations and solutions
  • Supporting our direct sales teams when they leverage partner solutions at customers
  • Creating technical and marketing collateral describing partner integrations and/or solutions
  • Developing and delivering exceptional company/product presentations and demonstrations 1:1 and 1:many
  • Working with product management and engineering to drive enhancements to the product 
  • Engage the partner community by attending technology events, writing blog posts and delivering presentations at trade shows
  • Educate partners and maximize Couchbase’s success through effective coaching and product positioning

Requirements

  • 10+ years working in a customer facing position such as presales, post-sales or consulting
  • 10+ years experience in traditional RDBMS or NoSQL databases, including data modeling. Direct exposure to Couchbase, Cassandra, MongoDB, Aerospike, Redis and Hadoop/HBase is preferable, but not required
  • 10+ years experience with Linux, Windows and their ecosystems, including Bash, Python and GitHub
  • Familiarity with programming languages such as Go, Python, JavaScript, Java, .NET or Objective-C
  • Bachelor's or Master's degree in Computer Science or a related field
  • Strong communication and presentation skills with an ability to present complex solutions concisely 1:1 and to a large audience
  • Fluency in speaking to the full range of IT stakeholders including the IT Director / CIO level
  • Enthusiastic and knowledgeable about established and emerging trends across the cloud ecosystem
  • Continuously learning about exciting new technologies like Kubernetes, Apache Camel, Prometheus, AWS Lambda, OpenWhisk, Kafka, Spark, Quarkus, and Spring Data, among other Cloud Native Computing Foundation projects
  • Passionate about the mobile and IoT ecosystem, including Android, iOS, field gateways and distributed systems with intermittent connectivity
  • Good knowledge of data center architecture covering multi datacenter and global deployments
  • Organized and analytical, able to thrive under pressure with outstanding time management skills
  • Creative and adaptive approach to removing obstacles and accelerating the integration efforts
  • Ability to travel to both partner and customer sites 25% or more
About Couchbase

Couchbase's mission is to be the platform that accelerates application innovation. To make this possible, Couchbase created an enterprise-class, multi-cloud NoSQL database architected on top of an open source foundation. Couchbase is the only database that combines the best of NoSQL with the power and familiarity of SQL, all in a single, elegant platform spanning from any cloud to the edge.  
 
Couchbase has become pervasive in our everyday lives; our customers include industry leaders Amadeus, AT&T, BD (Becton, Dickinson and Company), Carrefour, Comcast, Disney, DreamWorks Animation, eBay, Marriott, Neiman Marcus, Tesco, Tommy Hilfiger, United, Verizon, Wells Fargo, as well as hundreds of other household names.

Couchbase’s HQ is conveniently located in Santa Clara, CA with additional offices throughout the globe. We’re committed to a work environment where you can be happy and thrive, in and out of the office.

At Couchbase, you’ll get:
* A fantastic culture
* A focused, energetic team with aligned goals
* True collaboration with everyone playing their positions
* Great market opportunity and growth potential
* Time off when you need it
* Regular team lunches and fully-stocked kitchens
* Open, collaborative spaces
* Competitive benefits and pre-tax commuter perks

Whether you’re a new grad or a proven expert, you’ll have the opportunity to learn new skills, grow your career, and work with the smartest, most passionate people in the industry.

Revolutionizing an industry requires a top-notch team. Become a part of ours today. Bring your big ideas and we'll take on the next great challenge together.

Want to learn more? Check out our blog: https://blog.couchbase.com/

Couchbase is proud to be an equal opportunity workplace. Individuals seeking employment at Couchbase are considered without regard to age, ancestry, color, gender (including pregnancy, childbirth, or related medical conditions), gender identity or expression, genetic information, marital status, medical condition, mental or physical disability, national origin, protected family care or medical leave status, race, religion (including beliefs and practices or the absence thereof), sexual orientation, military or veteran status, or any other characteristic protected by federal, state, or local laws.
Software Engineering Architect - Charitable Donations & Payments
api python php jenkins java big data May 20

We are Givelify®, where fintech meets philanthropy. We help people instantly find causes that inspire them to action so they can change the world—one simple, joyful gift at a time. 

The Software Engineering Architect is tasked with building payment systems at scale. At our core, we enable our donors to give to the causes and organizations they are most passionate about. You will build systems that securely facilitate the movement of money through the credit, debit and ACH networks. You will build merchant onboarding, verification, KYC, and reporting systems. You will help develop and implement financial fraud detection systems.

Some of the meaningful work you will perform:

  • Build payment systems at scale. Build systems that help with the movement of money through the credit, debit and ACH networks. Build merchant onboarding, verification, KYC, and reporting systems. Assist in the development and implementation of financial fraud detection systems.
  • Write software that collects and queries data, and compose queries for investigation and analysis. Collect large volumes of data in real time from our applications and compose ad hoc queries, which is necessary to develop and support our products (a query sketch follows this list).
  • Architect and build APIs and libraries that other products and engineers will consume.
  • Participate in and guide engineering teams on all things technical: architecture definition and design ownership that covers not only technology but also data security, deployment and cloud strategy, CI/CD, and coding best practices.
  • Understand our codebase and systems and the business requirements they implement, so you can effectively make changes to our applications and investigate issues.
  • Serve as an effective communicator who can inform, explain, enable, teach, and persuade, and who can facilitate discussion via whiteboarding and other collaboration platforms.
  • Effectively collaborate and share ownership of your team’s codebase and applications. Fully engage in team efforts, speak up for what you think are the best solutions, and be able to converse respectfully and compromise when necessary.
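
The query sketch promised above, purely illustrative: the donations table, its schema, and the duplicate-charge heuristic are invented, and SQLite stands in for the real data store:

    # Illustrative ad hoc query sketch; the table and heuristic are hypothetical.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE donations (donor_id TEXT, amount REAL, ts TEXT)")
    conn.executemany(
        "INSERT INTO donations VALUES (?, ?, ?)",
        [("d1", 25.0, "2020-05-01"), ("d1", 25.0, "2020-05-01"), ("d2", 100.0, "2020-05-02")],
    )

    # Flag donors with multiple identical same-day charges, a common starting
    # point when investigating duplicate or suspicious payments.
    query = """
        SELECT donor_id, amount, ts, COUNT(*) AS n
        FROM donations
        GROUP BY donor_id, amount, ts
        HAVING n > 1
    """
    for row in conn.execute(query):
        print(row)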

We welcome your experience and talents:

  • BS/MS/PhD degree in Computer Science, Computer Engineering, Mathematics, or Physics, or equivalent fintech work experience
  • 7+ years of building payment processing and KYC systems that connect with APIs from major payment acquirers and KYC service providers
  • Experience building web services and developing APIs for other engineers
  • Technical leader with 10+ years of work in software engineering
  • Strong object-oriented design and development skills and advanced knowledge of PHP, Python, Java or similar programming languages
  • Familiarity working in Agile/Scrum environments
  • Familiarity with DevOps configuration tools (Git, Jira, Jenkins, etc.)
  • Strong SQL composition skills. Knowledge of big data and NoSQL databases is a plus!
  • A distinguished member of the engineering community, through extracurricular activities, publications, or associations with organizations like IEEE

Our People 
 
We are a virtual team of high-performing professionals who innovate & collaborate to fulfill our mission to help people instantly find causes that inspire them to action so they can change the world – one simple, joyful gift at a time. Our culture of integrity, heart, simplicity, & that "wow" factor fuel our aspiration to be among the tech industry's most inclusive & purpose-driven work environments. 
 
We take great pride in providing competitive pay, full benefits, amazing perks, and most importantly, the opportunity to put passion & purpose to work. 
 
Our Product 
 
From places of worship to world-changing nonprofit groups, Givelify harnesses the power of technology to bridge the gap between people and the causes they care about. Tap. Give. Done. Givelify's payment solution is designed to make the experience of giving as beautiful as the act of giving. 
 
Learn more about us at https://careers.givelify.com

Fraud Analyst - Fiat Team
Binance  
blockchain big data finance May 13
Please note, all positions at Binance require relevant experience. Applications without required experience will not be considered.

Binance is the global blockchain company behind the world’s largest digital asset exchange by trading volume and users, serving a greater mission to accelerate cryptocurrency adoption and increase the freedom of money for people around the world.

Are you looking to be a part of one of the most influential companies in the blockchain industry and contribute to the crypto-currency revolution that is changing the world?

Binance’s Fiat team is responsible for expanding global fiat initiatives for Binance, building and investing in the bridges that allow users from the traditional financial ecosystem to access crypto. We do this by building local fiat exchanges, seeking partnerships with banks and payment platforms for servicing fiat, as well as integrating strategic investments, JVs and acquisitions.

Job Scope

This role is responsible for overseeing day-to-day fraud management activities: conducting fraud monitoring of Binance customers using internal and external systems to reduce fraud-related losses and fines arising from inadequate fraud risk management.

This role will work closely with the product and big data teams on fraud prevention with tailored risk models, rules and plans, and will assist customers with chargeback (dispute) and fraud monitoring programs to mitigate losses and scheme sanctions.

This position can be located in Asia or Europe.

Responsibilities

  • Maintain and monitor fraud management rules and strategies for credit cards and payment processors (a toy rule sketch follows this list)
  • Monitor and review suspected fraud and work with both internal and external stakeholders to conduct investigations and take appropriate actions
  • Monitor chargeback performance and provide early warning when Binance is at risk of entering a scheme’s chargeback and fraud monitoring program
  • Assist customers in case of fraud attack and provide follow up action plans
  • Work with 3rd party risk and fraud management platforms and support with development teams to create our own tools and solutions where needed
  • Support Fraud Manager to maintain and improve Binance fraud management policies, procedures and manuals
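
The toy rule sketch promised above, to make the idea of a fraud rule concrete; the window and threshold are invented, and production rules combine many more signals:

    # Toy velocity rule: flag a card that makes too many purchases in a short
    # window. Thresholds are invented; real rules combine many such signals.
    from collections import defaultdict, deque

    WINDOW_SECONDS = 600    # 10-minute sliding window (assumption)
    MAX_TXNS_IN_WINDOW = 5  # threshold (assumption)

    recent = defaultdict(deque)  # card_id -> timestamps of recent transactions

    def is_suspicious(card_id: str, ts: float) -> bool:
        q = recent[card_id]
        q.append(ts)
        # Drop timestamps that have fallen out of the window.
        while q and ts - q[0] > WINDOW_SECONDS:
            q.popleft()
        return len(q) > MAX_TXNS_IN_WINDOW

    # Example: the sixth transaction within ten minutes trips the rule.
    print([is_suspicious("card-1", t) for t in range(0, 60, 10)])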

Requirements

  • 3-5 years of fraud management experience at payment processor, acquirer, bank or e-commerce platform
  • Experience in configuring and analysing results from risk and fraud management systems, strong data analytical and quantitative skills
  • Experience in dealing with card scheme, acquirer and 3rd party vendors
  • Familiar with card scheme rules, such as chargeback (dispute) and fraud monitoring program and local regulatory requirement
  • Experience in chargeback and fraud prevention with regards to cryptocurrency is a plus
  • Language: Fluent English is a must. Chinese is also a plus
  • Attention to detail and accuracy
  • Proactive, strong prioritisation and execution skills
  • Effective integration with different departments
  • Self-motivated and a good team player
Conditions

  • Do something meaningful; be a part of the future of finance technology and the No. 1 company in the industry
  • Fast moving, challenging and unique business problems
  • International work environment and flat organisation
  • Great career development opportunities in a growing company
  • Possibility for relocation and international transfers mid-career
  • Competitive salary
  • Flexible working hours, casual work attire
Senior Data Engineer
Atlassian
senior big data cloud aws May 12
Atlassian is continuing to hire for all open roles with all interviewing and on-boarding done virtually due to COVID-19. Everyone new to the team, along with our current staff, will temporarily work from home until it is safe to return to our offices.

Job Duties:

Build software solutions using public cloud offerings, big data processing, and storage technologies to develop a world-class data solution that powers crucial business decisions throughout the organization. Collect, extract, clean, and organize data across multiple datasets based on a deep understanding of big data challenges and the surrounding ecosystem. Manage the process of making data actionable by building and architecting solutions with public cloud offerings such as Amazon Web Services, Redshift, S3, EMR/Spark, and Presto/Athena. Own problems end-to-end by maintaining an understanding of the systems that generate data, and automate the analyses and reporting based on expertise in data pipelines with workflow tools such as Airflow, Oozie or Luigi. Design marketing data infrastructure based on the information architecture of the company's website, drawing on experience with Spark and Hive. Improve data by adding new sources, coding business rules, and producing new metrics that support the business, applying test automation and continuous delivery while ensuring data quality across the multiple datasets used for analytical purposes. Utilize knowledge of SQL, query tuning, schema design, ETL processes, test automation, continuous delivery, continuous integration, and source control systems such as Git. Possess a solid understanding of and experience in building RESTful APIs and microservices, e.g. with Flask.
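
As a hedged illustration of the workflow tooling named above, a minimal Airflow DAG might look like the following sketch (it assumes Airflow 2.x; the DAG id, schedule, and task are invented):

    # Minimal Airflow DAG sketch (assumes Airflow 2.x; all names are invented).
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_and_load():
        # Placeholder for a real extract/clean/load step.
        print("moving data...")

    with DAG(
        dag_id="example_marketing_pipeline",
        start_date=datetime(2020, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)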

Minimum Requirements:

Master's degree in Computer Science or a related field of study plus two (2) years of experience in data engineering with test automation and continuous delivery, ensuring data quality across multiple datasets used for analytical purposes, and solution building and architecting with public cloud offerings such as Amazon Web Services, Redshift, S3, EMR/Spark, and Presto/Athena, Lambda architecture or other big data architectural best practices, and with Spark and Hive.

Alternate Requirements:

Bachelor's degree in Computer Science or a related field of study plus five (5) years of progressive experience in data engineering with test automation and continuous delivery, ensuring data quality across multiple datasets used for analytical purposes, and solution building and architecting with public cloud offerings such as Amazon Web Services, Redshift, S3, EMR/Spark, and Presto/Athena, Lambda architecture or other big data architectural best practices, and with Spark and Hive.

Travel Requirements:

Up to 10% domestic travel.
More about our benefits

Whether you work in an office or a distributed team, Atlassian is highly collaborative and yes, fun! To support you at work (and play) we offer some fantastic perks: ample time off to relax and recharge, flexible working options, five paid volunteer days a year for your favourite cause, an annual allowance to support your learning & growth, unique ShipIt days, a company paid trip after five years and lots more.

More about Atlassian

Creating software that empowers everyone from small startups to the who’s who of tech is why we’re here. We build tools like Jira, Confluence, Bitbucket, and Trello to help teams across the world become more nimble, creative, and aligned—collaboration is the heart of every product we dream of at Atlassian. From Amsterdam and Austin, to Sydney and San Francisco, we’re looking for people who want to write the future and who believe that we can accomplish so much more together than apart. At Atlassian, we’re committed to an environment where everyone has the autonomy and freedom to thrive, as well as the support of like-minded colleagues who are motivated by a common goal to: Unleash the potential of every team.

Additional Information

We believe that the unique contributions of all Atlassians are the driver of our success. To make sure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate on the basis of race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status.

All your information will be kept confidential according to EEO guidelines.

Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.
Software Engineer - Data Platform
Atlassian
big data cloud May 11
Atlassian is continuing to hire for all open roles with all interviewing and on-boarding done virtually due to COVID-19. Everyone new to the team, along with our current staff, will temporarily work from home until it is safe to return to our offices.

Are you passionate about data platforms and tools? Are you a forward-thinking, structured problem solver who is passionate about building systems at scale? Do you understand data tools, know how to use them, and want to help our users to make data actionable? If so, this role with our team at Atlassian is for you.

We are looking for a Software Engineer to join our Data Platform Team and build a world-class data solution that powers crucial business decisions throughout the organization.

You will partner with analytical teams, data engineers and data scientists across various initiatives working with them to understand the gaps, and bring your findings back to the team to work on building these capabilities.

In this role, you will be part of the Discovery and Consumption team under the Data Platform. The team focuses on improving the discoverability and trust of data. We are building frictionless data experiences for all Atlassian employees by offering different services that help to generate impactful insights, such as the Atlassian data portal, data quality framework, metrics store, and much more.

More about you
You have proven experience working with big data ecosystems (AWS is an advantage). You’ve probably been in the industry as an engineer for 2+ years and have developed a passion for the data that drives businesses. You've got industry experience working with large datasets, and you're interested in self-serve analytics platforms and tools.

On your first day, we'll expect you to have:

  • A deep understanding of big data challenges
  • Built solutions using public cloud offerings such as Amazon Web Services
  • Experience with big data processing and storage technologies such as Spark, S3, and Druid
  • SQL knowledge
  • A solid understanding of and experience in building RESTful APIs and microservices, e.g. with Flask (a sketch follows below)
  • Experience with test automation and ensuring data quality across multiple datasets used for analytical purposes
  • Experience with continuous delivery, continuous integration, and source control systems such as Git
  • Experience with Python
  • A degree in Computer Science, EE, or a related STEM field

It's great, but not required, if you have:

  • Experience with Databricks
  • Experience with React
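
The Flask sketch promised above, purely illustrative; the endpoints and payloads are invented and are not Atlassian APIs:

    # Tiny RESTful service sketch with Flask (pip install flask).
    # The /datasets endpoint and its payload are invented for illustration.
    from flask import Flask, jsonify

    app = Flask(__name__)

    @app.route("/health")
    def health():
        return jsonify(status="ok")

    @app.route("/datasets/<name>")
    def dataset(name):
        # A real service would look this up in a catalog or metadata store.
        return jsonify(name=name, rows=0, trusted=False)

    if __name__ == "__main__":
        app.run(port=8080)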

More about the team
Data is a BIG deal at Atlassian. We ingest over 180 billion events each month into the data platform, and we have dozens of teams driving their decisions and guiding their operations based on the data and services we provide.
It’s our team's job to make more Atlassians data-informed and to facilitate R&D. We do this by providing an ambitious data platform, as well as services and data products that help teams better self-serve and improve their time to reliable insights.

You’ll be joining a team that is crazy smart and very direct. We ask hard questions and challenge each other to constantly improve our work. We're all about enabling growth by delivering the right data and insights in the right way to partners across the company.
More about our benefits

Whether you work in an office or a distributed team, Atlassian is highly collaborative and yes, fun! To support you at work (and play) we offer some fantastic perks: ample time off to relax and recharge, flexible working options, five paid volunteer days a year for your favourite cause, an annual allowance to support your learning & growth, unique ShipIt days, a company paid trip after five years and lots more.

More about Atlassian

Creating software that empowers everyone from small startups to the who’s who of tech is why we’re here. We build tools like Jira, Confluence, Bitbucket, and Trello to help teams across the world become more nimble, creative, and aligned—collaboration is the heart of every product we dream of at Atlassian. From Amsterdam and Austin, to Sydney and San Francisco, we’re looking for people who want to write the future and who believe that we can accomplish so much more together than apart. At Atlassian, we’re committed to an environment where everyone has the autonomy and freedom to thrive, as well as the support of like-minded colleagues who are motivated by a common goal to: Unleash the potential of every team.

Additional Information

We believe that the unique contributions of all Atlassians is the driver of our success. To make sure that our products and culture continue to incorporate everyone's perspectives and experience we never discriminate on the basis of race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status.

All your information will be kept confidential according to EEO guidelines.
Cloud Partner Solutions Engineer/Evangelist - AWS/GCP
cloud aws java big data linux May 05
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Coatue Management, Altimeter Capital and Franklin Templeton to join earlier investors Sequoia Capital, Benchmark, and Index Ventures in the recent Series E financing of a combined $250 million at a $4.5B valuation. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!

Confluent’s Business Development team is the voice of the company to our partners and the voice of our partners to our internal product and engineering teams. For our Cloud Partner Solutions Architect, we’re looking for a strong technologist who will grow and lead the technical relationship with our Cloud Partners.   You’ll jointly build enterprise streaming solutions that highlight Confluent’s unique features, enable cloud technical sellers and be the technical face of Confluent to AWS or GCP.

Successful Cloud Partner Solutions Architects typically have backgrounds as developers, systems engineers, or product specialists, but they all share a passion for expanding Confluent’s partner ecosystem and delivering the best of that world to our customers.

Responsibilities

  • Work with AWS/GCP to build differentiated solutions and offerings that include Confluent technology
  • Build and manage relationships with key technical leaders at AWS/GCP
  • Provide architecture guidance and recommendations across solutions, offerings, and customer opportunities, including understanding how to optimise for economic impact as well as performance
  • Educate and enable cloud partner architects on Confluent products
  • Serve as a subject matter expert to guide technology strategy and influence product direction by working across Product Management, Engineering, Sales, Marketing, etc.
  • Participate in webinars and public speaking engagements
  • Author whitepapers, technical articles, and blog posts
  • Create content for, and organize and deliver, technical workshops to enable and educate partners

Requirements

  • 10+ years working in partner- or customer-facing engineering roles
  • Deep knowledge of AWS/GCP strategy, products, organizational and operating models
  • Bachelor’s degree in Computer Science, a related field or equivalent practical experience
  • Demonstrated experience architecting enterprise solutions for customers and partners on AWS/GCP
  • Experience with messaging, streaming and ETL products commonly used in the enterprise
  • Experience authoring, presenting and delivering technical material
  • Experience operating within and across cross-functional teams including product management, engineering, sales, marketing, etc.
  • Familiarity with Linux, Java and software design principles
  • Excellent verbal and written communication skills, with a focus on identifying shared business value around complex software solutions
  • Ability to quickly learn, understand and work with new and emerging technologies, methodologies and solutions
  • Passion for the role and strong commitment to excellence

What gives you an edge

  • Knowledge of Apache Kafka and/or other streaming technologies
  • Experience serving as Technical Sales/Systems Engineer in a cloud environment or equivalent experience in a customer and/or partner-facing role.
  • Experience designing and building big data, stream processing and/or other distributed systems for Fortune 1000 companies
  • Experience working with global teams

Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.

Click here to review our California Candidate Privacy Notice, which describes how and when Confluent, Inc., and its group companies, collects, uses, and shares certain personal information of California job applicants and prospective employees.
Data Engineer
 
java python scala big data aws May 04
Atlassian is continuing to hire for all open roles with all interviewing and on-boarding done virtually due to COVID-19. Everyone new to the team, along with our current staff, will temporarily work from home until it is safe to return to our offices.

Atlassian is looking for a Data Engineer to join our Go-To Market Data Engineering (GTM-DE) team, which is responsible for building our data lake, maintaining our big data pipelines / services, and facilitating the movement of billions of messages each day. We work directly with business stakeholders and numerous platform and engineering teams to enable growth and retention strategies at Atlassian. We are looking for an open-minded, structured thinker who is passionate about building services that scale.

On a typical day you will help our stakeholder teams ingest data faster into our data lake, find ways to make our data pipelines more efficient, or come up with ideas to help kick-start self-serve data engineering within the company. Then you will move on to building micro-services and architecting, designing, and enabling self-serve capabilities at scale to help Atlassian grow.

You’ll get the opportunity to work on an AWS-based data lake backed by a full suite of open source projects such as Presto, Spark, Airflow, and Hive. We are a team with little legacy in our tech stack, so you’ll spend less time paying off technical debt and more time identifying ways to make our platform better and improve our users’ experience.
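
For a flavour of that stack, here is a minimal sketch of what a daily ingestion DAG on an Airflow/Hive setup like this might look like. It is illustrative only (not Atlassian's code), and every path, table, and task id in it is invented:

```python
# A hypothetical daily ingestion DAG; all paths, table names, and task ids
# are invented. Uses the Airflow 1.10-style BashOperator import.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash_operator import BashOperator

with DAG(
    dag_id="ingest_events_daily",
    start_date=datetime(2020, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Land the day's raw events in the lake's raw zone ({{ ds }} is the run date).
    land_raw = BashOperator(
        task_id="land_raw",
        bash_command=(
            "aws s3 sync s3://raw-events/{{ ds }}/ "
            "s3://data-lake/events/ds={{ ds }}/"
        ),
    )

    # Register the new partition so Hive and Presto can query it.
    add_partition = BashOperator(
        task_id="add_partition",
        bash_command=(
            "hive -e \"ALTER TABLE events "
            "ADD IF NOT EXISTS PARTITION (ds='{{ ds }}')\""
        ),
    )

    land_raw >> add_partition
```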

More about you
As a data engineer in the GTM-DE team, you will have the opportunity to apply your strong technical experience building highly reliable services to managing and orchestrating a multi-petabyte data lake. You enjoy working in a fast-paced environment and are able to take vague requirements and transform them into solid solutions. You are motivated by solving challenging problems, where creativity is as crucial as your ability to write code and test cases.

On your first day, we'll expect you to have:

  • At least 3 years of professional experience as a software engineer or data engineer
  • A BS in Computer Science or equivalent experience
  • Strong programming skills (some combination of Python, Java, and Scala preferred)
  • Experience with data modeling
  • Knowledge of data warehousing concepts
  • Experience writing SQL, structuring data, and data storage practices
  • Experience building data pipelines and microservices
  • Experience with Spark, Hive, Airflow, and similar technologies for processing large volumes of streaming data
  • A willingness to accept failure, learn and try again
  • An open mind to try solutions that may seem crazy at first
  • Experience working on Amazon Web Services (in particular using EMR, Kinesis, RDS, S3, SQS and the like)

It's preferred, but not technically required, that you have:

  • Experience building self-service tooling and platforms
  • Built and designed Kappa architecture platforms
  • A passion for building and running continuous integration pipelines.
  • Built pipelines using Databricks and are well versed in its APIs
  • Contributed to open source projects (e.g., operators in Airflow)
More about the team
Data is a BIG deal at Atlassian. We ingest over 180 billion events each month into our analytics platform and we have dozens of teams across the company driving their decisions and guiding their operations based on the data and services we provide.

It’s the data engineering team’s job to make Atlassian more data-driven and to facilitate growth. We do this by providing reliable, trustworthy metrics and other data elements, as well as services and data products that help teams better self-serve and shorten their time to reliable insights.

You’ll be joining a team with a brand new mission, expanding into a new office. There will be plenty of challenges and scope to grow. We work very closely with Sales, Marketing and Commerce teams. We value it when people ask hard questions and challenge each other to constantly improve our work. We are independent but love highly collaborative team environments, so you'll get the opportunity to work with lots of other awesome people just like you. We're all about enabling teams to execute growth and customer retention strategies by providing the right data fabrics and tools.

More about our benefits

Whether you work in an office or a distributed team, Atlassian is highly collaborative and yes, fun! To support you at work (and play) we offer some fantastic perks: ample time off to relax and recharge, flexible working options, five paid volunteer days a year for your favourite cause, an annual allowance to support your learning & growth, unique ShipIt days, a company paid trip after five years and lots more.

More about Atlassian

Creating software that empowers everyone from small startups to the who’s who of tech is why we’re here. We build tools like Jira, Confluence, Bitbucket, and Trello to help teams across the world become more nimble, creative, and aligned—collaboration is the heart of every product we dream of at Atlassian. From Amsterdam and Austin, to Sydney and San Francisco, we’re looking for people who want to write the future and who believe that we can accomplish so much more together than apart. At Atlassian, we’re committed to an environment where everyone has the autonomy and freedom to thrive, as well as the support of like-minded colleagues who are motivated by a common goal to: Unleash the potential of every team.

Additional Information

We believe that the unique contributions of all Atlassians are the driver of our success. To make sure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate on the basis of race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status.

All your information will be kept confidential according to EEO guidelines.
Software Architect
Numbrs  
aws kubernetes docker java apache-kafka machine learning Apr 28

Numbrs is reshaping the future of the workplace. We are a fully remote company, where every employee is free to live and work wherever they want.

Numbrs was founded with the vision to revolutionise banking. From day one, Numbrs has been a technology company driven by a strong entrepreneurial spirit and the urge to innovate. We live and embrace technology.

At Numbrs, our engineers don’t just develop things – we have an impact. We change the way people manage their finances by building the best products and services for our users.

Numbrs engineers are innovators, problem-solvers, and hard-workers who are building solutions in big data, mobile technology and much more. We look for professional, highly skilled engineers who evolve, adapt to change and thrive in a fast-paced, value-driven environment.

Join our dedicated technology team that builds massively scalable systems, designs low latency architecture solutions and leverages machine learning technology to turn financial data into action. Want to push the limit of personal finance management? Join Numbrs.

Job Description

You will work in the Architecture team to support the Head of Technology in all the activities of the Technology department. You will be responsible and accountable for the oversight of all aspects of engineering operations, the architecture and design of Numbrs platform, and the delivery of services and solutions within Technology.

Key Qualifications

  • a Bachelor's or higher degree in a technical field of study or equivalent practical experience
  • a minimum of 5 years' experience architecting, developing, evolving, and troubleshooting large-scale distributed systems
  • hands-on experience with micro-service based architecture
  • experience with software engineering best practices, coding standards, code reviews, testing and operations
  • hands-on experience with Java
  • knowledge of AWS, Kubernetes, and Docker
  • leadership experience
  • excellent troubleshooting and creative problem-solving abilities
  • excellent written and oral communication and interpersonal skills

Ideally, candidates will also have

  • experience with systems for automating deployment, scaling, and management of containerised applications, such as Kubernetes and Mesos
  • experience with machine learning and big data technologies, such as Kafka, Storm, Flink and Cassandra
  • experience with encryption and cryptography standards

Location: Remote

Site Reliability Engineer
Numbrs  
go kubernetes aws docker devops sysadmin Apr 21

Numbrs is reshaping the future of the workplace. We are a fully remote company, where every employee is free to live and work wherever they want.

Numbrs was founded with the vision to revolutionise banking. From day one, Numbrs has been a technology company driven by a strong entrepreneurial spirit and the urge to innovate. We live and embrace technology.

At Numbrs, our engineers don’t just develop things – we have an impact. We change the way people manage their finances by building the best products and services for our users.

Numbrs engineers are innovators, problem-solvers, and hard-workers who are building solutions in big data, mobile technology and much more. We look for professional, highly skilled engineers who evolve, adapt to change and thrive in a fast-paced, value-driven environment.

Join our dedicated technology team that builds massively scalable systems, designs low latency architecture solutions and leverages machine learning technology to turn financial data into action. Want to push the limit of personal finance management? Join Numbrs.

Job Description

You will be a part of a team that is responsible for deploying, supporting, monitoring and troubleshooting large scale micro-service based distributed systems with high transaction volume; documenting the IT infrastructure, policies and procedures. You will also be part of an on-call rotation.

Key Qualifications

  • a Bachelor's or higher degree in a technical field of study
  • a minimum of 5 years' experience deploying, monitoring, and troubleshooting large-scale distributed systems
  • background in Linux administration (mainly Debian)
  • scripting/programming knowledge of at least Unix shell scripting
  • good networking understanding (TCP/IP, DNS, routing, firewalls, etc.)
  • good understanding of technologies such as Apache, Nginx, Databases (relational and key-value), DNS servers, SMTP servers, etc.
  • understanding of cloud-based infrastructure, such as AWS
  • experience with systems for automating deployment, scaling and management of containerised applications, such as Kubernetes
  • quick to learn and fast to adapt to changing environments
  • excellent communication and documentation skills
  • excellent troubleshooting and creative problem-solving abilities
  • excellent communication and organisational skills in English

Ideally, candidates will also have

  • experience deploying and supporting big data technologies, such as Kafka, Spark, Storm and Cassandra
  • experience maintaining continuous integration and delivery pipelines with tools such as Jenkins and Spinnaker
  • experience implementing, operating and supporting open source tools for network and security monitoring and management on Linux/Unix platforms
  • experience with encryption and cryptography standards
Confluent Kafka Production Engineer
python big data Apr 21
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Sequoia Capital, Benchmark, and Index Ventures to recently invest a combined $125 million in our Series D financing. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!

About the Team:

The next big goal for the company is to make it as easy as possible for anyone in the world to use Confluent’s products to build their next killer streaming application. To do that we need to offer Confluent’s products as a Platform as a Service (PaaS). In order for this product to be successful, we absolutely have to bring in world-class talent that is passionate about running large scale, multi-tenant distributed data systems for customers who expect a very high level of availability.

About the Role:

A Kafka Production Engineer (KPE) is a key member of the Kafka team at Confluent. You will work closely with the team and other Confluent engineers to continuously build out and improve our PaaS offering. You will be part of the team responsible for key operational aspects (availability, reliability, performance, monitoring, emergency response, capacity planning, disaster recovery) of our Kafka systems in production. If you love the hum of big data systems, think about how to make them run as smoothly as possible, and want to have a big influence on the architecture and operational design of this new product, then you will fit right in.
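
For a concrete flavour of those operational aspects, below is a small, hypothetical consumer-lag check of the kind a KPE might automate. The broker address, group, topic, and partition count are invented, and it assumes the confluent-kafka Python client:

```python
# Hypothetical consumer-lag check; broker, group, topic, and partition
# count are all invented for illustration.
from confluent_kafka import Consumer, TopicPartition

consumer = Consumer({
    "bootstrap.servers": "broker-1:9092",   # hypothetical cluster
    "group.id": "billing-pipeline",         # hypothetical consumer group
})

topic = "payments"                          # hypothetical topic
partitions = [TopicPartition(topic, p) for p in range(6)]  # assume 6 partitions

# Fetch the group's committed offsets from the broker, one per partition.
for tp in consumer.committed(partitions, timeout=10):
    # Watermarks bound the offsets currently available in the partition.
    low, high = consumer.get_watermark_offsets(
        TopicPartition(tp.topic, tp.partition), timeout=10
    )
    # A negative committed offset means the group has not committed yet.
    lag = (high - tp.offset) if tp.offset >= 0 else (high - low)
    print(f"partition {tp.partition}: committed={tp.offset} end={high} lag={lag}")

consumer.close()
```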

Who You Are:

  • Smart, humble, and empathetic
  • Have a strong sense of teamwork and put the team’s and company’s interests first
  • Driven and excited about the challenges of a fast-paced, innovative software startup environment

What We're Looking For:

  • Strong fundamentals in distributed systems design and operations
  • Familiarity with Kafka or similar high-scale distributed data systems
  • Experience building automation to operate large-scale data systems
  • Solid experience working with large private or public clouds
  • A self starter with the ability to work effectively in teams
  • Excellent spoken / written communication
  • Proficiency with Python/Java, shell scripting, and system diagnostic and automation tooling preferred
  • Bachelor's degree in Computer Science or similar field or equivalent

What Gives You An Edge:

  • Experience operating Kafka at scale is a big plus
  • Experience working with JVMs a plus
  • Experience with systems performance a plus
Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.

Click here to review our California Candidate Privacy Notice, which describes how and when Confluent, Inc., and its group companies, collects, uses, and shares certain personal information of California job applicants and prospective employees.
Commercial Sales Engineer Intern
big data cloud Apr 15
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Sequoia Capital, Benchmark, and Index Ventures to recently invest a combined $125 million in our Series D financing. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!

Join us as we pursue our mission of putting an event streaming platform at the heart of every business. We are a company filled with people who are passionate about our product and seek to deliver the best experience for our customers. At Confluent, we’re committed to our work, customers, having fun and most importantly to each other’s success. Learn more about Confluent careers and how you can become a part of our journey!

What is a Sales Engineer?

Confluent Pre-Sales Engineers drive the technical evaluation stage of the overall sales process, making them critical drivers of customer success as real-time data streams become more and more important in the modern enterprise. In this role you will be the key technical advisor to the sales team, work very closely with the product management and engineering teams, and serve as a vital product advocate in front of prospects, customers, and the wider Kafka and Big Data communities.

As a Sales Engineer, you’ll combine your technical aptitude, exceptional communication skills, and creative problem solving to drive product adoption and success. Sales Engineers work alongside software engineers, product managers, and the sales team to engage with our customers in order to solve their most challenging problems using Confluent. Sales Engineers work with clients to understand real-world business problems and solve them by building & architecting technology.

What you'll work on:

  • Develop a project that will contribute to real-life improvements with high impact.
  • Evangelize our product and services to customers.
  • Collaborate with various Confluent teams.
  • Receive online/classroom training on the Confluent platform.

What we're looking for:

  • You are entering your final year in a BS degree program in computer science, engineering or a related discipline.
  • You have strong written and verbal communication skills.
  • You’re interested in, and/or have domain expertise in cloud technologies and IaaS platforms (AWS, GCP, Azure), data streaming technologies, or other integration platforms.
  • You have the ability to explain technical concepts to a wide range of audiences, you’re passionate about learning, and thrive under pressure.
Come join us if you’re looking for an internship that will allow you to use your technical skills, business acumen and entrepreneurial instincts! We’re looking for Sales Engineering Interns that can be a connector between people, technology, and business.

Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.
Solutions Architect - Australia
java big data Apr 14
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Sequoia Capital, Benchmark, and Index Ventures to recently invest a combined $125 million in our Series D financing. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!

Solutions Architects drive customer success by helping customers realise business value from the burgeoning flow of real-time data streams in their organisations. In this role you’ll interact directly with our customers to provide expert consultancy, leveraging deep knowledge of best practices in the use of Apache Kafka, the broader Confluent Platform, and complementary systems like Hadoop, Spark, Storm, relational databases, and various NoSQL databases. Throughout all of these interactions, you’ll build strong relationships with customers, ensure exemplary delivery standards, and have a lot of fun building state-of-the-art data infrastructure alongside colleagues who are widely recognised as leaders in this space.

The role requires travel across APAC, to work on-site with our customers in the region. You'll be based in Australia with the ability to travel to client engagements as required.

What we're looking for:

  • Deep experience designing, building, and operating in-production Big Data, stream processing, and/or enterprise data integration solutions using Apache Kafka
  • Exceptional interpersonal communication capabilities, demonstrated through a history of successful B2B infrastructure software development projects
  • Strong desire to tackle hard technical problems and proven ability to do so with little or no direct daily supervision
  • Bachelor’s level degree in Computer Science or an engineering, mathematics, or other quantitative field
  • Proficiency in Java or Python
  • A minimum of 5 years' experience in a Professional Services role
  • Prior experience of regular business travel around the region
  • Ability to travel up to 50% of your time to client engagements

What gives you an edge:

  • Previous experience building solutions that use Apache Kafka alongside Hadoop, relational and NoSQL databases, message queues, and related products
  • Solid understanding of basic systems operations (disk, network, operating systems, etc)
  • Experience building and operating large-scale systems
  • Additional languages, such as Cantonese or Mandarin
Culture is a huge part of Confluent; we’re searching for the best people, who not only excel at their role but also contribute to the health, happiness, and growth of the company. Inclusivity and openness are important traits, with regular company-wide and team events. Here are some of the personal qualities we’re looking for: 

Smart, humble and empathetic
Hard working, you get things done
Hungry to learn in a field which is ever evolving
Adaptable to the myriad of challenges each day can present
Inquisitive and not afraid to ask all the questions, no matter how basic
Ready to roll up your sleeves and help others, getting involved in projects where you feel you can add value
Strive for excellence in your work, your team and the company 

Come and build with us. We are one of the fastest growing software companies in the market. A company built on the tenets of transparency, direct communication and inclusivity. Come meet the streams dream team and have a direct impact on how we shape Confluent.

Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.
Growing FinTech company looking to hire a fully remote Senior DevOps Engineer
aws kubernetes docker terraform linux devops Apr 11

We are experiencing strong demand for our e-commerce payment service and are looking for a number of highly skilled individuals to join our DevOps team. Please only apply if you're located within +/- 1 hour of CET/CEST.

We are constantly developing and always striving to improve our software solutions, automating as many processes as possible. You will work both independently and as part of a dedicated DevOps team of 3 colleagues from all around Europe. At the moment we have some services in AWS and a large part at a local hosting partner; one of your tasks will be to change this distribution together with the team. Furthermore, we're entering new markets this year, which also requires further enhancements to our current setup and passing compliance audits.

Expectations: You will be working in a fast-paced environment where change is normal. You must be able to keep a cool head in a hectic and busy environment. You will have a high degree of independence, and it is important that you are able to manage several tasks at the same time - even when deadlines are short.

We are looking for talents with:

  • Experience as a Linux technical specialist
  • Experience with core AWS services: EC2, EKS, RDS (MariaDB/MySQL), DynamoDB, and networking
  • Experience with AWS big data analytics services (Athena, S3, Glue, Redshift, etc.) - see the sketch after this list
  • Hands-on experience with Kubernetes
  • Experience in configuration management tools (Ansible, Terraform are preferable)
  • Maintenance of monitoring tools (InfluxDB/Graphite/Prometheus + Grafana)
  • Experience with migrations to AWS
  • Experience with the microservices in the cloud
  • Understanding of cloud networking principles
  • Experience with CI/CD pipelines (GitLab)
  • Administration of Java and Spring Boot applications
  • Familiarity with messaging systems (ActiveMQ, Camel, Kafka)
  • Good scripting skills (at least 1 language)
  • Eye for clean code
  • Experience with compliance processes like ISO27001 and PCI DSS
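
As referenced in the list above, here is a small hypothetical sketch of the kind of AWS analytics work involved: running an Athena query over lake data with boto3. The region, database, query, and bucket names are made up:

```python
# Hypothetical Athena query via boto3; region, database, table, and
# result bucket are invented for illustration.
import time

import boto3

athena = boto3.client("athena", region_name="eu-central-1")

started = athena.start_query_execution(
    QueryString="SELECT status, count(*) AS n FROM payments GROUP BY status",
    QueryExecutionContext={"Database": "ecommerce"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/queries/"},
)
query_id = started["QueryExecutionId"]

# Athena is asynchronous: poll until the query reaches a terminal state.
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    result = athena.get_query_results(QueryExecutionId=query_id)
    for row in result["ResultSet"]["Rows"]:   # first row is the header
        print([col.get("VarCharValue") for col in row["Data"]])
```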

Our technology stack:

  • Docker
  • Kubernetes (EKS)
  • Terraform
  • AWS
  • Ansible
  • Grafana
  • Prometheus
  • GitLab
  • Kafka
  • ActiveMQ

Some of the upcoming tasks will be:

  • Take part in the dockerization of Spring Boot applications
  • Organize container orchestration with Kubernetes
  • Refactor our constantly changing code base
  • Implement best practices for our daily infrastructure operations
  • Align our infrastructure with compliance requirements
  • Manage CI/CD processes with the team
  • Set up and maintain new environments in AWS
  • Improve and automate infrastructure development
  • Monitor metrics and develop ways to improve them
  • Work closely with the BI team to provide an AWS analytics platform

Requirements:

  • You probably have a B.S. or M.Sc. in computer science or a similar field
  • You have experience with highly automated systems
  • You are able to see solutions from the perspective of the end-user
  • You speak and write English fluently

About our team: We are a team of highly motivated developers who work remotely from our own offices. We collaborate much like open-source projects, with core maintainers for our services. Each developer has a lot of freedom, working in a flat hierarchy and a very streamlined process where the domain experts are easily available on Slack or via Hangouts. We work with a very rapid release schedule, often releasing multiple times per day. This gives us a quick and motivating feedback loop and makes it very easy for a developer to see their effect on the business! It also allows us to experiment and adopt new trends/frameworks quickly.

Big Data Engineer
big data python data science machine learning aws Apr 09

At CrowdStrike we’re on a mission - to stop breaches. Our groundbreaking technology, services delivery, and intelligence gathering, together with our innovations in machine learning and behavioral-based detection, allow our customers to not only defend themselves, but to do so in a future-proof manner. We’ve earned numerous honors and top rankings for our technology, organization and people – clearly confirming our industry leadership and our special culture driving it. We also offer flexible work arrangements to help our people manage their personal and professional lives in a way that works for them. So if you’re ready to work on unrivaled technology where your desire to be part of a collaborative team is met with a laser-focused mission to stop breaches and protect people globally, let’s talk.

About the Role

We are looking to hire a Big Data Engineer for the Data Engineering team at CrowdStrike. The Data Engineering team operates within the Data Science organization, and provides the necessary infrastructure and automation for users to analyze and act on vast quantities of data effortlessly. The team has one of the most critical roles to play in ensuring our products are best-in-class in the industry. You will interact with product managers and other engineers in building both internal and external facing services.

This position is open to candidates in Bucharest (Office or Romania Remote), Brasov, Cluj, Iasi and Timisoara (Remote)

You will:

  • Write jobs using PySpark to process billions of events per day (see the sketch after this list)
  • Fine-tune existing Hadoop / Spark clusters
  • Rewrite some existing Pig jobs in PySpark
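
As promised in the list above, here is a minimal, illustrative sketch of a PySpark batch job of this general shape; it is not CrowdStrike's actual pipeline, and the paths and columns are invented:

```python
# A toy daily rollup job in PySpark; the S3 paths, column names, and app
# name are invented for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-event-rollup").getOrCreate()

# Read one day of raw JSON events from the lake (hypothetical path).
events = spark.read.json("s3://telemetry-lake/raw/dt=2020-04-09/")

# Aggregate: count events per sensor and event type for the day.
rollup = (
    events
    .filter(F.col("event_type").isNotNull())
    .groupBy("sensor_id", "event_type")
    .agg(F.count("*").alias("event_count"))
)

# Write the result back to the lake for downstream consumers.
rollup.write.mode("overwrite").parquet("s3://telemetry-lake/rollups/dt=2020-04-09/")

spark.stop()
```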

Key Qualifications

You have:

  • BS degree in Computer Science or related field
  • 7+ years of relevant work experience
  • Experience in building data pipelines at scale (Note: We process over 1 Trillion events per week)
  • Good knowledge of Hadoop / Spark / Apache Kafka, Python, AWS, PySpark, and other tools in the Big Data ecosystem
  • Good programming skills in Python
  • Operational experience tuning clusters for optimal data processing
  • Experience in building out ETL jobs at scale
  • Good knowledge of distributed system design and associated tradeoffs
  • Good knowledge of CI / CD and associated best practices
  • Familiarity with Docker-based development and orchestration

Bonus points awarded if you have:

  • Created automated / scalable infrastructure and pipelines for teams in the past
  • Contributed to the open source community (GitHub, Stack Overflow, blogging)
  • Prior experience with Spinnaker, Relational DBs, or KV Stores
  • Prior experience in the cybersecurity or intelligence fields

Benefits of Working at CrowdStrike:

  • Market leader in compensation
  • Comprehensive health benefits
  • Working with the latest technologies
  • Training budget (certifications, conferences)
  • Flexible work hours and remote friendly environment
  • Wellness programs
  • Stocked fridges, coffee, soda, and lots of treats
  • Peer recognition
  • Inclusive culture focused on people, customers and innovation
  • Regular team activities, including happy hours, community service events

We are committed to building an inclusive culture of belonging that not only embraces the diversity of our people but also reflects the diversity of the communities in which we work and the customers we serve. We know that the happiest and highest performing teams include people with diverse perspectives and ways of solving problems so we strive to attract and retain talent from all backgrounds and create workplaces where everyone feels empowered to bring their full, authentic selves to work.

CrowdStrike is an Equal Opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex including sexual orientation and gender identity, national origin, disability, protected veteran status, or any other characteristic protected by applicable federal, state, or local law.

Federal Solutions Architect - Secret Clearance
java python scala big data linux cloud Apr 06
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Sequoia Capital, Benchmark, and Index Ventures to recently invest a combined $125 million in our Series D financing. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!

We are looking for a Solutions Architect to join our Customer Success team. As a Solutions Architect (SA), you will help customers leverage streaming architectures and applications to achieve their business results. In this role, you will interact directly with customers to provide software architecture, design, and operations expertise that leverages your deep knowledge of and experience in Apache Kafka, the Confluent platform, and complementary systems such as Hadoop, Spark, Storm, relational and NoSQL databases. You will develop and advocate best practices, gather and validate critical product feedback, and help customers overcome their operational challenges.

Throughout all these interactions, you will build a strong relationship with your customer in a very short space of time, ensuring exemplary delivery standards. You will also have the opportunity to help customers build state-of-the-art streaming data infrastructure, in partnership with colleagues who are widely recognized as industry leaders, as well as optimizing and debugging customers’ existing deployments.

Location:
You will be based in LOCATION, with 50% travel expected.

Responsibilities

  • Helping a customer determine their platform and/or application strategy for moving to a more real-time, event-based business. Such engagements often involve remote preparation; presenting an onsite or remote workshop for the customer’s architects, developers, and operations teams; investigating (with Engineering and other coworkers) solutions to difficult challenges; and writing a recommendations summary doc.
  • Providing feedback to the Confluent Product and Engineering groups
  • Building tooling for another team or the wider company to help us push our technical boundaries and improve our ability to deliver consistently with high quality
  • Testing performance and functionality of new components developed by Engineering
  • Writing or editing documentation and knowledge base articles, including reference architecture materials and design patterns based on customer experiences
  • Honing your skills, building applications, or trying out new product features
  • Participating in community and industry events

Requirements

  • Deep experience designing, building, and operating in-production Big Data, stream processing, and/or enterprise data integration solutions, ideally using Apache Kafka
  • Demonstrated experience successfully managing multiple B2B infrastructure software development projects, including driving expansion, customer satisfaction, feature adoption, and retention
  • Experience operating Linux (configure, tune, and troubleshoot both RedHat and Debian-based distributions)
  • Experience using cloud providers (Amazon Web Services, Google Cloud, Microsoft Azure) for running high-throughput systems
  • Experience with Java Virtual Machine (JVM) tuning and troubleshooting
  • Experience with distributed systems (Kafka, Hadoop, Cassandra, etc.)
  • Proficiency in Java
  • Strong desire to tackle hard technical problems, and proven ability to do so with little or no direct daily supervision
  • Excellent communication skills, with an ability to clearly and concisely explain tricky issues and complex solutions
  • Ability to quickly learn new technologies
  • Ability and willingness to travel up to 50% of the time to meet with customers
  • TS/SCI clearance required

Bonus Points

  • Experience helping customers build Apache Kafka solutions alongside Hadoop technologies, relational and NoSQL databases, message queues, and related products
  • Experience with Scala, Python, or Go
  • Experience working with a commercial team and demonstrated business acumen
  • Experience working in a fast-paced technology start-up
  • Experience managing projects, using any known methodology to scope, manage, and deliver on plan no matter the complexity
  • Bachelor-level degree in computer science, engineering, mathematics, or another quantitative field


Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.
Senior Software Engineer
 
senior golang java python javascript c Apr 02
At Couchbase, big things happen. Every day, we’re translating vision into reality by tackling innovative and exciting challenges head-on with a team that prioritizes honesty, transparency, and humility. This is a breakthrough stage in our company, where the enthusiasm of our employees and leadership team is infectious and growing. You’ll have the opportunity to learn new skills, grow your career, and work with the smartest, most passionate people in the industry. At Couchbase, you can go home knowing that you have an impact. What we do matters. Enjoy the journey.

This role is also open to remote work within US as our teams are distributed all over the world!

What are we up to?
If you like working on high performance enterprise software, then you’ll like this! As a Senior Software Engineer in our R&D team, you will help build the cutting-edge distributed, shared-nothing architecture of our NoSQL software. You will be part of a team creating NoSQL database products used by hundreds of modern enterprises and applications, tackling the hardest problems our customers have, and employing your skills in Golang, Java, C/C++, Erlang, JavaScript and/or Python (with possibly a few other languages thrown in for good measure). This is a very exciting time to build something new and innovative for the databases of the future. Like open source? So do we: Couchbase and its engineers are active open source contributors for Couchbase, memcached, and other projects.

While other NoSQL vendors may experience architectural limitations, Couchbase architects all of its own systems with a laser focus on high-performance use cases for the largest modern enterprise applications. Your engineering contributions will be key in helping Couchbase keep our performance advantage over the competition.

RESPONSIBILITIES - You will
  • Translate product requirements into engineering requirements and write high-quality, performant code
  • Design and implement mission-critical code coverage as it pertains to the data model for a scale-out database
  • Debug and fix issues by participating in high-quality code reviews
  • Align with the Indexing, Support, Mobile, Search, Storage, and Clustering teams to integrate new features into our data platform
  • Engineer needle-moving tools and features with simplicity, elegance, and economy
  • Be agile! Think quality! Think leverage!

PREFERRED QUALIFICATIONS
  • You are passionate about database architecture and systems
  • You can hack in several of your preferred languages, from C/C++, Java, and Python to Erlang and Go
  • You have multiple years of commercial and/or open source software experience
  • You think that distributed systems are amazing
  • You’re self-motivated, independent, and a quick learner who likes to take on challenges
  • You like working in organizations that strive to stay ahead of the curve by rapidly adopting technological innovation

MINIMUM QUALIFICATIONS
  • A Master's degree in Computer Science or commensurate experience
  • You're an excellent teammate
  • Excellent written and verbal communication skills
About Couchbase

Unlike other NoSQL databases, Couchbase provides an enterprise-class, multicloud-to-edge database that offers the robust capabilities required for business-critical applications on a highly scalable and available platform. Couchbase is built on open standards, combining the best of NoSQL with the power and familiarity of SQL, to simplify the transition from mainframe and relational databases.

Couchbase’s HQ is conveniently located in Santa Clara, CA with additional offices throughout the globe. We’re committed to a work environment where you can be happy and thrive, in and out of the office.

At Couchbase, you’ll get:
* A fantastic culture
* A focused, energetic team with aligned goals
* True collaboration with everyone playing their positions
* Great market opportunity and growth potential
* Time off when you need it.
* Regular team lunches and fully-stocked kitchens.
* Open, collaborative spaces.
* Competitive benefits and pre-tax commuter perks

Whether you’re a new grad or a proven expert, you’ll have the opportunity to learn new skills, grow your career, and work with the smartest, most passionate people in the industry.

Revolutionizing an industry requires a top-notch team. Become a part of ours today. Bring your big ideas and we'll take on the next great challenge together.

Check out some recent industry recognition:

Want to learn more? Check out our blog: https://blog.couchbase.com/

Couchbase is proud to be an equal opportunity workplace and is dedicated to pursuing, hiring and developing a diverse workforce. Individuals seeking employment at Couchbase are considered without regards to age, ancestry, color, gender (including pregnancy, childbirth, or related medical conditions), gender identity or expression, genetic information, marital status, medical condition, mental or physical disability, national origin, protected family care or medical leave status, race, religion (including beliefs and practices or the absence thereof), sexual orientation, military or veteran status, or any other characteristic protected by federal, state, or local laws.
Backend Engineer Data Team
aws java apache-spark hadoop hbase backend Mar 26

Sonatype’s mission is to enable organizations to better manage their software supply chain. We offer a series of products and services including the Nexus Repository Manager and Nexus Lifecycle Manager. We are a remote and talented product development group, and we work in small autonomous teams to create high-quality products. Thousands of organizations and millions of developers use our software. If you have a passion for challenging problems, software craftsmanship, and having an impact, then Sonatype is the right place for you.

We are expanding our Data team, which is responsible for unlocking insight from vast amounts of software component data and powering our suite of products that enable our customers to make informed and automated decisions in managing their software supply chain. As a Backend Engineer, you will lead or contribute to the design, development, and monitoring of systems and solutions for collecting, storing, processing, and analyzing large data sets. You will work in a team made up of Data Scientists and other Software Engineers.

No one is going to tell you when to get up in the morning, or dole out a bunch of small tasks for you to do every single day. Members of Sonatype's Product organization have the internal drive and initiative to make the product vision a reality. Flow should be the predominant state of mind.

Requirements:

  • Deep software engineering experience; we primarily use Java.
  • Database and data manipulation skills working with relational or non-relational models.
  • Strong ability to select and integrate appropriate tools, frameworks, and systems to build great solutions.
  • Deep curiosity for how things work and desire to make them better.
  • Legally authorized to work (without sponsorship) in Canada, Colombia, or the United States of America and are currently residing in the corresponding country.

Nice To Haves:

  • Degree in Computer Science, Engineering, or another quantitative field.
  • Knowledge and experience with non-relational databases (e.g., HBase, MongoDB, Cassandra).
  • Knowledge and experience with large-scale data tools and techniques (e.g., MapReduce, Hadoop, Hive, Spark).
  • Knowledge and experience with AWS big data services (e.g., EMR, Elasticsearch).
  • Experience working in a highly distributed environment, using modern collaboration tools to facilitate team communication.

What We Offer:

  • The opportunity to be part of an incredible, high-growth company, working on a team of experienced colleagues
  • Competitive salary package
  • Medical/Dental/Vision benefits
  • Business casual dress
  • Flexible work schedules that ensure time for you to be you
  • 2019 Best Places to Work Washington Post and Washingtonian
  • 2019 Wealthfront Top Career Launch Company
  • EY Entrepreneur of the Year 2019
  • Fast Company Top 50 Companies for Innovators
  • Glassdoor ranking of 4.9
  • Come see why we've won all of these awards
Senior Software Engineer, Backend
Numbrs  
java backend microservices kubernetes machine-learning senior Mar 25

At Numbrs, our engineers don’t just develop things – we have an impact. We change the way people manage their finances by building the best products and services for our users.

Numbrs engineers are innovators, problem-solvers, and hard-workers who are building solutions in big data, mobile technology and much more. We look for professional, highly skilled engineers who evolve, adapt to change and thrive in a fast-paced, value-driven environment.

Join our dedicated technology team that builds massively scalable systems, designs low latency architecture solutions and leverages machine learning technology to turn financial data into action. Want to push the limit of personal finance management? Join Numbrs.

Job Description

You will be a part of a team that is responsible for developing, releasing, monitoring and troubleshooting large scale micro-service based distributed systems with high transaction volume. You enjoy learning new things and are passionate about developing new features, maintaining existing code, fixing bugs, and contributing to overall system design. You are a great teammate who thrives in a dynamic environment with rapidly changing priorities.

All candidates will have

  • a Bachelor's or higher degree in a technical field of study or equivalent practical experience
  • experience with high volume production grade distributed systems
  • experience with micro-service based architecture
  • experience with software engineering best practices, coding standards, code reviews, testing and operations
  • hands-on experience with Spring Boot
  • professional experience in writing readable, testable and self-sustaining code
  • strong hands-on experience with Java (minimum 8 years)
  • knowledge of AWS, Kubernetes, and Docker
  • excellent troubleshooting and creative problem-solving abilities
  • excellent written and oral communication skills in English, and strong interpersonal skills

Ideally, candidates will also have

  • experience with Big Data technologies such as Kafka, Spark, and Cassandra
  • experience with CI/CD toolchain products like Jira, Stash, Git, and Jenkins
  • fluent with functional, imperative, and object-oriented languages
  • experience with Scala, C++, or Golang
  • knowledge of Machine Learning

Location: residence in the UK mandatory; home office

Data Engineer - Lead Subject Matter Expert
data science big data Mar 13
Role:

At Springboard, we are on a mission to bridge the skills gap by delivering high-quality, affordable education for new economy skills. We’ve already launched hundreds of our students into Data Science careers through our top-rated Data Science course that pairs students with an industry mentor and offers them a Job Guarantee.  

Now we’re expanding our Data Science course offerings, and we’re looking for an expert who has a strong background in Data Engineering to help us build a new Data Engineering course in the coming months. This is a unique opportunity to put your expertise into action to educate the next generation of Data Engineers and increase your domain mastery through teaching. 

The course will be an online 6-to-9 month program designed to help students find a job within 6 months of completion. You’ll set the vision to ensure we’re teaching all that is needed to succeed as a Data Engineer. Your work will include creating projects and other materials to define students’ learning experiences.  

This role will be a part-time contract role for a duration of 3-4 months (starting immediately), with potential for ongoing consulting work. We estimate a workload of roughly 15-20 hours/week. You can work with us out of our office in San Francisco or remotely. This is a paid engagement. 

Responsibilities:

You’ll work with our curriculum development team to create a Data Engineering course.

As part of this role, you will

  • Set the vision for effectively teaching key data engineering concepts and skills
  • Define learning objectives and course structure (units and projects)
  • Collaborate with the instructional designers and other subject matter experts to build the full curriculum. This includes:
      • Designing, writing, and building course projects (and associated resources)
      • Curating high-quality resources (videos, articles) that effectively teach course topics
      • Writing descriptions that summarize and explain the importance of each topic covered in the course
  • Create rubrics for mentors to evaluate student work (especially course projects)

Experience

  • Currently working as a Data Engineer in the U.S., with 3+ years of experience that includes data warehousing, ETL, big data systems, data modeling and schema design, and owning data quality.
  • 1+ years of experience hiring and/or managing Data Engineers
  • Passion for teaching. Previous teaching experience is a huge bonus.

Skills

  • Understanding of Data Engineering landscape and how the field varies across companies
  • Ability to identify the tools and industry practices students need to learn to successfully become Data Engineers
  • Clear point-of-view on what skills are needed for an entry level Data Engineer role and how to teach them in a structured manner 
  • Proven ability to create projects with clear instructions and documentation
  • Excellent verbal & written communication skills

You are

  • Able to work independently and produce high-quality work without extensive supervision 
  • Diligent about meeting deadlines
  • A collaborator working efficiently with a diverse group of individuals
  • Receptive and responsive to feedback and are willing to iterate on work
  • Passionate about education

Availability

  • 15-20 hours of work/week for 3-4 months, starting immediately
  • Must be available to connect synchronously during PST working hours on weekdays
  • Can be remote or work from our SF office
Backend Engineer, Data Processing Rust
backend java data science machine learning big data linux Mar 13
About Kraken

Our mission is to accelerate the adoption of cryptocurrency so that you and the rest of the world can achieve financial freedom and inclusion. Founded in 2011 and with over 4 million clients, Kraken is one of the world's largest, most successful bitcoin exchanges, and we're growing faster than ever. Our range of successful products is playing an important role in the mainstream adoption of crypto assets. We attract people who constantly push themselves to think differently and chart exciting new paths in a rapidly growing industry. Kraken is a diverse group of dreamers and doers who see value in being radically transparent. Let's change the way the world thinks about money! Join the revolution!

About the Role

This is a fully remote role; we will consider applicants based in North America, South America, and Europe.

Our Engineering team is having a blast while delivering the most sophisticated crypto-trading platform out there. Help us continue to define and lead the industry.

As part of Kraken's Backend Data team, you will work within a world-class team of engineers building Kraken's infrastructure using Rust. As a Backend Engineer in Data Processing, you will help design and build fraud and security detection systems leveraging big data pipelines, machine learning, and Rust.
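
For a flavour of the scoring and anomaly-detection work involved, here is a toy offline sketch using scikit-learn's IsolationForest. Kraken's production systems are built in Rust, so this is purely illustrative, and every feature name and number in it is invented:

```python
# Toy anomaly-scoring sketch with scikit-learn's IsolationForest; the
# features and numbers below are entirely invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic "normal" activity: [amount_usd, tx_per_hour, account_age_days]
normal = rng.normal(loc=[50.0, 2.0, 400.0],
                    scale=[20.0, 1.0, 150.0],
                    size=(1000, 3))

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal)

candidates = np.array([
    [55.0, 2.0, 390.0],    # looks like typical behaviour
    [9500.0, 40.0, 1.0],   # large burst from a brand-new account
])

# Lower decision scores are more anomalous; predict() labels outliers as -1.
print(model.decision_function(candidates))
print(model.predict(candidates))
```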

Responsibilities:

  • Design and implementation of micro-services in Rust
  • Writing reusable, testable, and efficient code
  • Implementation of risk evaluation and anti-fraud systems, or similar scoring and anomaly detection systems
  • Select and design appropriate data processing storage and pipelines
  • Work with our Fraud/Data Science team or provide the Data Science know-how to support Product requirements

Requirements:

  • At least 5 years of experience in software engineering
  • Experience with Rust
  • Experience writing network services or asynchronous code
  • Python, Java or similar work experience
  • Working knowledge of Kafka, Pulsar, or similar
  • Experience using a Linux server environment
  • Ability to independently debug problems involving the network and operating system

A strong candidate will also:

  • Be familiar with deployment using Docker
  • Have previous work experience on Risk scoring or anomaly detection systems
  • Have experience with Machine Learning and its ecosystem
  • Have experience with other strongly typed programming languages
  • Have experience using SQL and distributed data solutions like Spark, Hadoop or Druid
  • Be passionate about secure, reliable and fast software
We’re powered by people from around the world with their own unique backgrounds and experiences. We value all Krakenites and their talents, contributions, and perspectives.

Check out all our open roles at https://jobs.lever.co/kraken. We’re excited to see what you’re made of.  

Learn more about us:
Full Stack Engineer - DSS
Dataiku  
full stack java python javascript scala big data Mar 13
Dataiku’s mission is big: to enable all people throughout companies around the world to use data by removing friction surrounding data access, cleaning, modeling, deployment, and more. But it’s not just about technology and processes; at Dataiku, we also believe that people (including our people!) are a critical piece of the equation.



As a full stack developer in the Dataiku engineering team, you will play a crucial role in helping us have a real impact on the daily life of data analysts and scientists. You will be joining one of 3 teams that develop new features and improve existing parts of Data Science Studio (DSS) based on user feedback.

DSS is an on-premises application that connects together all big data technologies. We work with SQL databases, Spark, Kubernetes, Hadoop, Elasticsearch, MLlib, scikit-learn, Shiny, … and many more. Basically, our technological stack is made of all the technologies present in Technoslavia!

Our backend is mainly written in Java but also includes large chunks in Scala, Python, and R. Our frontend is based on Angular and makes extensive use of d3.js.

One of the most unique characteristics of DSS is the breadth of its scope and the fact that it caters both to data analysts (with visual and easy to use analytics) and data scientists (with deep integration in code and libraries, and a web-based IDE).

This is a full-time position, based in France either in our Paris office or remote.

Your missions

  • Turn ideas or rough specifications into full-fledged product features, including unit and end-to-end tests.
  • Tackle complex problems that range from performance and scalability to usability, so that complicated machinery looks straightforward and simple to use for our users.
  • Help your coworkers: review code, spread your technical expertise, improve our tool chain
  • Bring your energy to the team!

You are the ideal recruit if

  • You have mastered a programming language (Java, C#, Python, JavaScript, you name it, ...).
  • You know that low-level Java code and slick web applications in JavaScript are two sides of the same coin and are eager to work on both.
  • You know that ACID is not a chemistry term (see the sketch after this list).
  • You have a first experience (either professional or personal) building a real product or working with big data or cloud technologies.
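
For readers who want the refresher: ACID stands for atomicity, consistency, isolation, durability. Below is a small, self-contained Python illustration of atomicity using the standard library's sqlite3 module; the accounts table and amounts are invented for the example, and this is not Dataiku code.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, "
                 "balance INT CHECK (balance >= 0))")
    conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                     [("alice", 100), ("bob", 0)])
    conn.commit()

    try:
        with conn:  # one transaction: both updates commit, or neither does
            conn.execute("UPDATE accounts SET balance = balance + 150 "
                         "WHERE name = 'bob'")
            conn.execute("UPDATE accounts SET balance = balance - 150 "
                         "WHERE name = 'alice'")  # violates CHECK -> rollback
    except sqlite3.IntegrityError:
        pass

    # bob's credit was rolled back together with alice's failed debit
    print(conn.execute("SELECT * FROM accounts ORDER BY name").fetchall())
    # -> [('alice', 100), ('bob', 0)]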

Hiring process

  • Initial call with the talent acquisition manager
  • On-site meeting (or video call) with the hiring manager
  • Home test to show your skills
  • Final on-site interviews


To fulfill its mission, Dataiku is growing fast! In 2019, we achieved unicorn status, went from 200 to 400 people and opened new offices across the globe. We now serve our global customer base from our headquarters in New York City as well as offices in Paris, London, Munich, Amsterdam, Denver, Los Angeles, Singapore, Sydney and Dubaï. Each of them has a unique culture, but underpinning local nuances, we always value curiosity, collaboration, and can-do attitudes!
Share this job:
Site Reliability Engineer
Numbrs  
go kubernetes aws docker devops machine learning Mar 11

At Numbrs, our engineers don’t just develop things – we have an impact. We change the way people manage their finances by building the best products and services for our users.

Numbrs engineers are innovators, problem-solvers, and hard-workers who are building solutions in big data, mobile technology and much more. We look for professional, highly skilled engineers who evolve, adapt to change and thrive in a fast-paced, value-driven environment.

Join our dedicated technology team that builds massively scalable systems, designs low latency architecture solutions and leverages machine learning technology to turn financial data into action. Want to push the limit of personal finance management? Join Numbrs.

Job Description

You will be a part of a team that is responsible for deploying, supporting, monitoring and troubleshooting large scale micro-service based distributed systems with high transaction volume; documenting the IT infrastructure, policies and procedures. You will also be part of an on-call rotation.

All candidates will have

  • a Bachelor's or higher degree in a technical field of study
  • a minimum of 5 years' experience deploying, monitoring and troubleshooting large scale distributed systems
  • a background in Linux administration (mainly Debian)
  • scripting/programming knowledge, including at least Unix shell scripting
  • a good understanding of networking (TCP/IP, DNS, routing, firewalls, etc.)
  • a good understanding of technologies such as Apache, Nginx, databases (relational and key-value), DNS servers, SMTP servers, etc.
  • an understanding of cloud-based infrastructure, such as AWS
  • experience with systems for automating deployment, scaling and management of containerised applications, such as Kubernetes
  • the ability to learn quickly and adapt to changing environments
  • excellent communication and documentation skills
  • excellent troubleshooting and creative problem-solving abilities
  • excellent communication and organisational skills in English

Ideally, candidates will also have

  • experience deploying and supporting big data technologies, such as Kafka, Spark, Storm and Cassandra
  • experience maintaining continuous integration and delivery pipelines with tools such as Jenkins and Spinnaker
  • experience implementing, operating and supporting open source tools for network and security monitoring and management on Linux/Unix platforms
  • experience with encryption and cryptography standards

Location: Zurich, Switzerland

Share this job:
Director of Sales Engineering - Central Europe
Dataiku  
executive python machine learning big data Mar 11
Dataiku’s mission is big: to enable all people throughout companies around the world to use data by removing friction surrounding data access, cleaning, modeling, deployment, and more. But it’s not just about technology and processes; at Dataiku, we also believe that people (including our people!) are a critical piece of the equation.



Dataiku is hiring a Director of Sales Engineering to oversee our Central Europe team of Sales Engineers. The position will be based in Frankfurt, Berlin, or Munich.

The Sales Engineering function at Dataiku is the primary technical function within the Sales organization, providing technical support (both for presales and post sales) to the Account Executives and directly contributing to Dataiku’s revenue objectives. As the “trusted advisors” in the sales process, the Sales Engineers help to build interest in Dataiku, build the solution to the prospect’s needs, and then build the evaluation to provide the prospect/customer with the proof that they need to make their purchasing decision. 

The Director role is key to growing Dataiku’s business in Central Europe; they will work both as an individual contributor and as the team’s leader, supporting objectives related to our ability to deliver compelling, highly technical customer engagements in the field. Key responsibilities in the coming months will be enabling the existing team, hiring and retaining top talent, and ensuring excellence in our execution.

You’ll report directly to the Regional Vice President of Sales Engineering for EMEA.

RESPONSIBILITIES:

  • Lead a team of Sales Engineers helping to ensure technical success throughout the sales process
  • Be the main technical point of contact for the VP of Sales, Central Europe: strategize on opportunities, give reliable visibility into the pipeline, and train / coach the sales team on technical topics
  • Mentor/coach team members during on-boarding and subsequent phases to ensure proper ramping of skills and capabilities
  • Mentor / coach team members on a day-to-day basis: brainstorm on the strategy to adopt for each opportunity and provide constructive feedback
  • Interact with customers and prospects to understand their business challenges and engage in the evaluation process
  • Build strong working relationships with cross functional teams to ensure alignment between pre and post sales activities
  • Work with cross functional teams, product management, R&D, and other organizations to ensure alignment, provide process and product feedback, and resolve critical customer situations

REQUIREMENTS

  • 5+ years' experience in sales engineering of enterprise software products; big data tech experience preferred
  • 2+ years' related Sales Engineering management experience preferred
  • Experience in complex / large-scale enterprise analytics deployments
  • Familiarity with Python and/or R
  • Experience in data storage and computing infrastructure for data of all sizes (SQL, NoSQL, Hadoop, Spark, on-premise, and cloud)
  • Knowledge of machine learning libraries and techniques
  • Experience with visualization and dashboarding solutions
  • Excellent communication and public speaking skills
  • Native level in German and good communication skills in English
  • Ability to travel 10 to 40%
To fulfill its mission, Dataiku is growing fast! In 2019, we achieved unicorn status, went from 200 to 400 people and opened new offices across the globe. We now serve our global customer base from our headquarters in New York City as well as offices in Paris, London, Munich, Amsterdam, Denver, Los Angeles, Singapore, Sydney and Dubaï. Each of them has a unique culture, but underpinning local nuances, we always value curiosity, collaboration, and can-do attitudes!
Share this job:
Technical Support Engineer
Dataiku  
python data science big data docker cloud azure Mar 10
Dataiku’s mission is big: to enable all people throughout companies around the world to use data by removing friction surrounding data access, cleaning, modeling, deployment, and more. But it’s not just about technology and processes; at Dataiku, we also believe that people (including our people!) are a critical piece of the equation.



Dataiku is looking for an experienced Technical Support engineer to join its rapidly growing international team (with members distributed across the US, EMEA, and APAC regions). The ideal candidate is an autonomous individual who is passionate about getting big data and data science technologies working together to solve business problems, and who will efficiently help customers solve their technical issues with Dataiku DSS. It is a great opportunity to join Dataiku early on and help scale that critical function for the company.

As a Technical Support Engineer, you are a polished communicator and a trusted technical resource. You have worked with sophisticated/demanding customers, and you demonstrate excellent judgment in prioritization and are a multi-tasker. You love learning new cutting-edge technologies and getting your hands dirty to solve challenging technical problems. You are naturally driven to become the expert in the space.

Responsibilities

  • Providing technical solutions and responding to technical requests from customers through our different channels: email, chat, web conference, and support portal
  • Managing and resolving support issues with a high degree of technical complexity
  • Acting as a liaison between clients and other Dataiku teams (Customer Success, Engineering, Data Science, etc.) to help deliver a fast and efficient resolution to issues or questions raised by various customers
  • Improving efficiencies by documenting and standardizing support processes for our customers, along with capturing/developing best practices
  • Developing tools that will help in diagnosing, resolving or triaging hard-to-get-at problems as efficiently and promptly as possible
  • Documenting knowledge in the form of incident notes, technical articles, and contributions to knowledge base or forums within specific areas of expertise
  • Timely follow-up on customer commitments, effectively prioritizing process / product refinements; relaying lessons learned and feedback internally to our other client-facing and technical teams

Requirements

  • BS in an Engineering or advanced analytics field, or equivalent practical experience
  • A strong competency in technical problem solving, with experience in working with advanced log analysis and various debugging techniques
  • Working proficiency with Unix-based operating systems and general systems administration knowledge (e.g. command line interface, SSH, handling permissions, file limits, networking, resource utilization, etc.)
  • Experience working with a programming language
  • Experience working with at least one type of relational database and SQL
  • Excellent problem solving and analytical skills with an aptitude for learning new technologies
  • Ability to be autonomous, resourceful, and a proactive self-starter, while also remaining process-oriented and a team player
  • Strong communication skills and the ability to interface both with technical and non-technical individuals as needed

Nice to haves...

  • 3-5 years of experience in a client-facing engineering or technical role, ideally involving a complex and rapidly evolving software/product
  • Technical understanding of the analytics and big data technologies (Hadoop, Spark, SQL databases and Data Warehouses) is a definite plus
  • Prior experience with and demonstrated interest in staying up to date on the latest data technologies (Python, R, Hadoop, Jupyter notebooks, Spark, H2O, Docker/Kubernetes, etc.)
  • Hands-on experience with Python and/or R
  • Experience working with various APIs
  • Experience with authentication and authorization systems like LDAP, SAML, and Kerberos
  • Working knowledge of various cloud technologies (AWS, Azure, GCP, etc.)
  • Some knowledge in data science and/or machine learning

Benefits

  • Opportunity to join Dataiku early on and help scale the company
  • Competitive compensation package, equity, health benefits, and paid vacation
  • Trips to Paris (our European HQ)
  • Opportunity to work with a smart, passionate and driven team
  • Dataiku has a strong culture based on key values: Ownership, Passion, Autonomy and Friendliness
To fulfill its mission, Dataiku is growing fast! In 2019, we achieved unicorn status, went from 200 to 400 people and opened new offices across the globe. We now serve our global customer base from our headquarters in New York City as well as offices in Paris, London, Munich, Amsterdam, Denver, Los Angeles, Singapore, Sydney and Dubaï. Each of them has a unique culture, but underpinning local nuances, we always value curiosity, collaboration, and can-do attitudes!
Share this job:
Technical Support Engineer
Dataiku  
python data science big data docker cloud azure Mar 10
Dataiku’s mission is big: to enable all people throughout companies around the world to use data by removing friction surrounding data access, cleaning, modeling, deployment, and more. But it’s not just about technology and processes; at Dataiku, we also believe that people (including our people!) are a critical piece of the equation.



Dataiku is looking for an experienced Technical Support engineer to join its rapidly growing international team (with members distributed across the US, EMEA, and APAC regions). The ideal candidate is an autonomous individual who is passionate about getting big data and data science technologies working together to solve business problems, and who will efficiently help customers solve their technical issues with Dataiku DSS. It is a great opportunity to join Dataiku early on and help scale that critical function for the company.

As a Technical Support Engineer, you are a polished communicator and a trusted technical resource. You have worked with sophisticated/demanding customers, and you demonstrate excellent judgment in prioritization and are a multi-tasker. You love learning new cutting-edge technologies and getting your hands dirty to solve challenging technical problems. You are naturally driven to become the expert in the space.

We are looking for someone in the US to help with providing world-class support to our Federal customer base. In particular, this position will require the individual to be either a US citizen or qualified green card holder. Clearance is not necessary but would be a plus.

Responsibilities:

  • Providing technical solutions and responding to technical requests from customers through our different channels: email, chat, web conference, and support portal
  • Managing and resolving support issues with a high degree of technical complexity
  • Acting as a liaison between clients and other Dataiku teams (Customer Success, Engineering, Data Science, etc.) to help deliver a fast and efficient resolution to issues or questions raised from various customers
  • Improving efficiencies by documenting and standardizing support processes for our customers, along with capturing/developing best practices
  • Developing tools that will help in diagnosing, resolving or triaging hard-to-get-at problems as efficiently and promptly as possible
  • Documenting knowledge in the form of incident notes, technical articles, and contributions to knowledge base or forums within specific areas of expertise
  • Timely follow-up on customer commitments, effectively prioritizing process / product refinements; relaying lessons learned and feedback internally to our other client-facing and technical teams
  • Providing support to some of our largest, most challenging Federal and Enterprise accounts

Requirements:

  • BS in an Engineering or advanced analytics field, or equivalent practical experience
  • A strong competency in technical problem solving, with experience in working with advanced log analysis and various debugging techniques
  • Working proficiency with Unix-based operating systems and general systems administration knowledge (i.e. command line interface, SSH, handling permissions, file limits, networking, resource utilization, etc.)
  • Experience working with a programming language
  • Experience working with at least one type of relational database and SQL
  • Excellent problem solving and analytical skills with an aptitude for learning new technologies
  • Ability to be autonomous, resourceful, and a proactive self-starter, while also remaining process-oriented and a team player
  • Strong communication skills and the ability to interface with both technical and non-technical individuals as needed
  • US citizen or green card holder

Bonus Points:

  • 3-5 years of experience in a client-facing engineering or technical role, ideally involving a complex and rapidly evolving software/product
  • Technical understanding of the analytics and big data technologies (Hadoop, Spark, SQL databases and Data Warehouses) is a definite plus
  • Prior experience with and demonstrated interest in staying up to date on the latest data technologies (Python, R, Hadoop, Jupyter notebooks, Spark, H2O, Docker/Kubernetes, etc.)
  • Hands-on experience with Python and/or R
  • Experience working with various APIs
  • Experience with authentication and authorization systems like LDAP, SAML, and Kerberos
  • Working knowledge of various cloud technologies (AWS, Azure, GCP, etc.)
  • Some knowledge in data science and/or machine learning
  • Experience or proven track record working with Federal clients

Benefits:

  • Opportunity to join Dataiku at an early stage and help scale the Support organization
  • Competitive compensation package, equity, health benefits, and paid vacation
  • Trips to our different offices (Paris, NYC, etc.)
  • Opportunity to work with a smart, passionate, and driven team
  • Startup atmosphere: free food and drinks, foosball/FIFA/ping pong, company happy hours and team days, and more
  • Strong culture based on key values: Ownership, Passion, Autonomy and Friendliness
To fulfill its mission, Dataiku is growing fast! In 2019, we achieved unicorn status, went from 200 to 400 people and opened new offices across the globe. We now serve our global customer base from our headquarters in New York City as well as offices in Paris, London, Munich, Amsterdam, Denver, Los Angeles, Singapore, Sydney and Dubaï. Each of them has a unique culture, but underpinning local nuances, we always value curiosity, collaboration, and can-do attitudes!
Share this job:
Enterprise Account Executive - Financial Services
executive c saas big data Feb 25
Dubbed an "open-source unicorn" by Forbes, Confluent is the fastest-growing enterprise subscription company our investors have ever seen. And how are we growing so fast? By pioneering a new technology category with an event streaming platform, which enables companies to leverage their data as a continually updating stream of events, not as static snapshots. This innovation has led Sequoia Capital, Benchmark, and Index Ventures to recently invest a combined $125 million in our Series D financing. Our product has been adopted by Fortune 100 customers across all industries, and we’re being led by the best in the space—our founders were the original creators of Apache Kafka®. We’re looking for talented and amazing team players who want to accelerate our growth, while doing some of the best work of their careers. Join us as we build the next transformative technology platform!

Enterprise Account Executives play a key role in driving Confluent’s sales activities in region. This role includes developing and executing the go-to-market strategy for your territory. The ideal candidate needs to have experience selling complex Database, Messaging, Big Data, Open Source and/or SaaS solutions into large corporations and multinational companies.

What you will do:

  • Build awareness for Kafka and the Confluent Platform within large enterprises
  • Aggressively prospect, identify, qualify and develop sales pipeline
  • Close business to exceed monthly, quarterly and annual bookings objectives
  • Build strong and effective relationships, resulting in growth opportunities
  • Build and maintain relationships with new and existing Confluent partners

What we are looking for:

  • An ability to articulate and sell the business value of big data and the impact on businesses of all sizes
  • Deep experience selling within the Database, Open Source, Messaging or Big Data space
  • 5+ years' experience selling enterprise technology in a fast-paced and competitive market
  • Experience selling to developers and C-level executives
  • Highly motivated, over achiever, team player
  • Strong analytical and writing abilities
  • Exceptional presentation skills
  • Entrepreneurial spirit/mindset, flexibility toward dynamic change
  • Goal oriented, with a track record of overachievement (President’s Club, Rep of the Year, etc.)

Why you will enjoy working here:

  • We’re solving hard problems that are relevant in every industry
  • Your growth is important to us, we want you to thrive here
  • You will be challenged on a daily basis
  • We’re a company that truly values a #oneteam mindset
  • We have great benefits to support you AND your family
Culture is a huge part of Confluent; we’re searching for the best people who not only excel at their role, but also contribute to the health, happiness and growth of the company. Inclusivity and openness are important traits, with regular company-wide and team events. Here are some of the personal qualities we’re looking for: 

Smart, humble and empathetic
Hard working, you get things done
Hungry to learn in a field which is ever evolving
Adaptable to the myriad of challenges each day can present
Inquisitive and not afraid to ask all the questions, no matter how basic
Ready to roll up your sleeves and help others, getting involved in projects where you feel you can add value
Strive for excellence in your work, your team and the company 

Come and build with us. We are one of the fastest growing software companies in the market. A company built on the tenets of transparency, direct communication and inclusivity. Come meet the streams dream team and have a direct impact on how we shape Confluent.


Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact.
Share this job:
Project Management Curriculum Writer
project-management agile kanban data science big data cloud Feb 22

Project Management Curriculum Writer

  • Education
  • Remote
  • Contract

Who We Are

Thinkful is a new type of school that brings high-growth tech careers to ambitious people everywhere. We provide 1-on-1 learning through our network of industry experts, hiring partners, and online platform to deliver a structured and flexible education. Thinkful offers programs in web development, data science, and design, with in-person communities in up-and-coming tech hubs around the U.S. To join the Thinkful network visit thinkful.com.

Job Description

Thinkful is launching a new Technical Project Management program which aims to be the best-in-class remote, part-time Technical Project Management program offered today. As part of this effort, we're looking for a Technical Project Management subject matter expert to join us in executing on our content roadmap for this exciting new program. You will be creating the backbone of a new program that propels people from a background in academia and the sciences into an impactful career as a Technical Project Manager. You'll produce written content, lesson plans (including instructor notes and student activity descriptions), presentation decks, assessments, and learning objectives, all to support our students as they learn the core skills of technical project management. Your work product will be extremely impactful, as it forms the core asset around which the daily experience of our students will revolve.

Responsibilities

  • Consistently deliver content that meets spec and is on time to support our program launch roadmap.
  • Create daily lesson plans consisting of:
  • Presentation decks that instructors use to lecture students on a given learning objective.
  • Instructor notes that instructors use alongside the presentation decks.
  • Activity descriptions — notes describing tasks students complete together in order to advance the learning objective in a given lecture.
  • Create curriculum checkpoint content on specific learning objectives. In addition to the in-class experience, our students also spend time reading and completing tasks for a written curriculum hosted on the Thinkful platform.
  • Create code assets where necessary to support lesson plans, student activities, and written curriculum content.
  • Iterate on deliverables based on user feedback.

Requirements

  • 3+ years of hands-on Technical Project Management industry experience 
  • Demonstrated subject matter expert in Technical Project Management 
  • Experience managing projects using Agile, Kanban and Six Sigma methodologies
  • Experience working on multiple projects of all complexity levels in an environment with changing priorities
  • Change management expertise
  • Web application development experience
  • Experience running large-scale big data projects and/or AWS cloud-based projects
  • Collaborative: you enjoy partnering with people and have excellent project management skills and follow-through
  • Excellent writing skills. You've got a gift for writing about complicated concepts in a beginner-friendly way. You can produce high-quality prose as well as high-quality presentations.

Compensation and Benefits

  • Contract position with a collaborative team
  • Ability to work remotely with flexible hours 
  • Access to all available course curriculum for personal use
  • Membership to a global community of over 500 Software Engineers, Developers, and Data Scientists who, like you, want to keep their skills sharp and help learners break into the industry
Share this job:
Big Data ETL, Architecture
amazon-redshift amazon-redshift-spectrum postgis amazon-s3 data-structures big data Feb 20

Lean Media is looking for experts to help us with the ongoing import, enrichment (including geospatial), and architecture of big datasets (millions to billions of records at a time).

Our infrastructure and tech stack includes:

  • Amazon Redshift, Spectrum, Athena
  • AWS Lambda
  • AWS S3 Data Lakes
  • PostgreSQL, PostGIS
  • Apache Superset

We are looking for expertise in:

  • Building efficient ETL pipelines, including enrichments (see the sketch after this list)
  • Best practices regarding the ongoing ingestion of big datasets from disparate sources
  • High-performance enrichment of geospatial data
  • Optimizing data structures as they relate to achieving performant queries via analytics tools
  • Architecting a sustainable data infrastructure supporting all of the above
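
As a hedged sketch of the geospatial-enrichment step (table names, columns, and the connection string are invented placeholders, not Lean Media's schema), a PostGIS-backed batch enrichment might tag each record with the region containing its coordinates:

    import psycopg2

    # Placeholder DSN; in practice this would come from configuration.
    conn = psycopg2.connect("dbname=warehouse user=etl")

    with conn, conn.cursor() as cur:
        # Spatial join: with a GiST index on regions.geom, ST_Contains
        # avoids scanning every region for every event.
        cur.execute("""
            UPDATE events e
            SET region_id = r.id
            FROM regions r
            WHERE e.region_id IS NULL
              AND ST_Contains(r.geom,
                              ST_SetSRID(ST_MakePoint(e.lon, e.lat), 4326))
        """)

    conn.close()

From there, the enriched rows could be unloaded to S3 and loaded into Redshift with a COPY, keeping the heavy geospatial work in PostGIS where it is indexed.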

While this posting is for a contract position, we are open to short projects, ongoing engagements, and even full time employment opportunities. If you have a high degree of skill and experience in the area of big data architecture in AWS, then please let us know!

Share this job:
Senior Data Engineer
apache machine-learning algorithm senior python scala Feb 19

SemanticBits is looking for a talented Senior Data Engineer who is eager to apply computer science, software engineering, databases, and distributed/parallel processing frameworks to prepare big data for the use of data analysts and data scientists. You will mentor junior engineers and deliver data acquisition, transformations, cleansing, conversion, compression, and loading of data into data and analytics models. You will work in partnership with data scientists and analysts to understand use cases, data needs, and outcome objectives. You are a practitioner of advanced data modeling and optimization of data and analytics solutions at scale, an expert in data management, data access (big data, data marts, etc.), programming, and data modeling, and familiar with analytic algorithms and applications (like machine learning).

Requirements

  • Bachelor’s degree in computer science (or related) and eight years of professional experience
  • Strong knowledge of computer science fundamentals: object-oriented design and programming, data structures, algorithms, databases (SQL and relational design), networking
  • Demonstrable experience engineering scalable data processing pipelines.
  • Demonstrable expertise with Python, Spark, and wrangling of various data formats - Parquet, CSV, XML, JSON (see the sketch after this list).
  • Experience with the following technologies is highly desirable: Redshift (w/Spectrum), Hadoop, Apache NiFi, Airflow, Apache Kafka, Apache Superset, Flask, Node.js, Express, AWS EMR, Scala, Tableau, Looker, Dremio
  • Experience with Agile methodology, using test-driven development.
  • Excellent command of written and spoken English
  • Self-driven problem solver
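
As a minimal, illustrative PySpark sketch of that kind of wrangling (the paths, bucket, and column names are hypothetical, not a SemanticBits pipeline), converting a raw CSV extract into partitioned Parquet might look like this:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("csv-to-parquet").getOrCreate()

    # Read the raw CSV extract; schema inference keeps the sketch short,
    # though a real pipeline would declare an explicit schema.
    df = (spark.read
          .option("header", True)
          .option("inferSchema", True)
          .csv("s3://example-bucket/raw/claims.csv"))

    # Light cleansing: trim a string column and normalize the date column.
    df = (df.withColumn("provider", F.trim(F.col("provider")))
            .withColumn("service_date",
                        F.to_date("service_date", "yyyy-MM-dd")))

    # Columnar Parquet output, partitioned for performant analytic queries.
    (df.write.mode("overwrite")
       .partitionBy("service_date")
       .parquet("s3://example-bucket/curated/claims/"))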
Share this job:
Cloud Architect for Enterprise AI - Remote
Dataiku  
cloud data science big data linux aws azure Feb 18
Dataiku’s mission is big: to enable all people throughout companies around the world to use data by removing friction surrounding data access, cleaning, modeling, deployment, and more. But it’s not just about technology and processes; at Dataiku, we also believe that people (including our people!) are a critical piece of the equation.



Dataiku is looking for an experienced Cloud Architect to join its Field Engineering Team to support the deployment of its Enterprise AI Platform (Dataiku DSS) to an ever-growing customer base.

As a Cloud Architect, you’ll work with customers at every stage of their relationship with Dataiku - from the initial evaluations to enterprise-wide deployments. In this role, you will help customers to design, build and run their Data Science and AI Enterprise Platforms.

This role requires adaptability, inventiveness, and strong communication skills. Sometimes you will work with clients on traditional big data technologies such as SQL data warehouses and on-premise Hadoop data lakes, while at other times you will be helping them to discover and implement the most cutting-edge tools: Spark on Kubernetes, cloud-based elastic compute engines, and GPUs. If you are interested in staying at the bleeding edge of big data and AI while maintaining a strong working knowledge of existing enterprise systems, this will be a great fit for you.

The position can be based remotely.

Responsibilities

  • Evangelize the challenges of building Enterprise Data Science Platforms to technical and non-technical audiences
  • Understand customer requirements in terms of scalability, availability and security and provide architecture recommendations
  • Deploy Dataiku DSS in a large variety of technical environments (on-prem/cloud, Hadoop, Kubernetes, Spark, …)
  • Design and build reference architectures, how-tos, scripts and various helpers to make the deployment and maintenance of Dataiku DSS smooth and easy
  • Automate operation, installation, and monitoring of the data science ecosystem components in our infrastructure stack (see the sketch after this list)
  • Provide advanced support for strategic customers on deployment and scalability issues
  • Coordinate with Revenue and Customer teams to deliver a consistent experience to our customers
  • Train our clients and partners in the art and science of administering a bleeding-edge Elastic AI platform
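
As a hedged, minimal example of the monitoring side (the endpoint, port, and exit codes are placeholders, and this is not a Dataiku-provided tool), an automated liveness probe for a DSS-style web service could be as simple as:

    import sys
    import urllib.request

    URL = "http://dss.example.internal:11200/"  # hypothetical endpoint

    try:
        with urllib.request.urlopen(URL, timeout=5) as resp:
            ok = (resp.status == 200)
    except OSError as exc:  # URLError/HTTPError both subclass OSError
        print(f"CRITICAL: {URL} unreachable ({exc})")
        sys.exit(2)

    print("OK" if ok else f"WARNING: unexpected status {resp.status}")
    sys.exit(0 if ok else 1)

Exit codes 0/1/2 follow the common check-plugin convention, so a script like this could be wired into most alerting stacks.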

Requirements

  • Strong Linux system administration experience
  • Grit when faced with technical issues. You don’t rest until you understand why it does not work.
  • Comfort and confidence in client-facing interactions
  • Ability to work both pre and post sale
  • Experience with cloud based services like AWS, Azure and GCP
  • Hands-on experience with the Hadoop and/or Spark ecosystem for setup, administration, troubleshooting and tuning
  • Hands-on experience with the Kubernetes ecosystem for setup, administration, troubleshooting and tuning
  • Some experience with Python
  • Familiarity with Ansible or other application deployment tools

Bonus points for any of these

  • Experience with authentication and authorization systems like LDAP, Kerberos, AD, and IAM
  • Experience debugging networking issues such as DNS resolutions, proxy settings, and security groups
  • Some knowledge in data science and/or machine learning
  • Some knowledge of Java

Benefits

  • Work on the newest and best big data technologies for a unicorn startup
  • Consult on AI infrastructure for some of the largest companies in the world
  • Equity
  • Opportunity for international exchange to another Dataiku office
  • Attend and present at big data conferences
  • Startup atmosphere: free food and drinks, an international team, general good times and friendly people


To fulfill its mission, Dataiku is growing fast! In 2019, we achieved unicorn status, went from 200 to 400 people and opened new offices across the globe. We now serve our global customer base from our headquarters in New York City as well as offices in Paris, London, Munich, Amsterdam, Denver, Los Angeles, Singapore, Sydney and Dubaï. Each of them has a unique culture, but underpinning local nuances, we always value curiosity, collaboration, and can-do attitudes!
Share this job:
Site Reliability Engineer
hadoop linux bigdata python ruby c Feb 14

The Wikimedia Foundation is hiring two Site Reliability Engineers to support and maintain (1) the data and statistics infrastructure that powers a big part of decision making in the Foundation and in the Wiki community, and (2) the search infrastructure that underpins all search on Wikipedia and its sister projects. This includes everything from eliminating boring things from your daily workflow by automating them, to upgrading a multi-petabyte Hadoop or multi-terabyte Search cluster to the next upstream version without impacting uptime and users.

We're looking for an experienced candidate who's excited about working with big data systems. Ideally you will already have some experience working with software like Hadoop, Kafka, ElasticSearch, Spark, and other members of the distributed computing world. Since you'll be joining an existing team of SREs, you'll have plenty of space and opportunities to get familiar with our tech (Analytics, Search, WDQS), so there's no need to immediately have the answer to every question.

We are a full-time distributed team with no one working out of the actual Wikimedia office, so we are all together in the same remote boat. Part of the team is in Europe and part in the United States. We see each other in person two or three times a year, either during one of our off-sites (most recently in Europe), the Wikimedia All Hands (once a year), or Wikimania, the annual international conference for the Wiki community.

Here are some examples of projects we've been tackling lately that you might be involved with:

  •  Integrating an open-source GPU software platform like AMD ROCm in Hadoop and in the Tensorflow-related ecosystem
  •  Improving the security of our data by adding Kerberos authentication to the analytics Hadoop cluster and its satellite systems
  •  Scaling the Wikidata query service, a semantic query endpoint for graph databases
  •  Building the Foundation's new event data platform infrastructure
  •  Implementing alarms that alert the team of possible data loss or data corruption
  •  Building a new and improved Jupyter notebooks ecosystem for the Foundation and the community to use
  •  Building and deploying services in Kubernetes with Helm
  •  Upgrading the cluster to Hadoop 3
  •  Replacing Oozie with Airflow as a workflow scheduler (see the sketch after this list)
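
As a hedged illustration of what an Oozie workflow looks like once ported (assuming Airflow 2.x; the DAG, task names, and the refine_events callable are hypothetical, not Wikimedia's actual code):

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator
    from airflow.operators.python import PythonOperator

    def refine_events(**context):
        # Placeholder for the real transformation step
        # (e.g., a wrapper that submits a Spark job).
        print("refining events for", context["ds"])

    with DAG(
        dag_id="refine_event_data",
        start_date=datetime(2020, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        wait = BashOperator(
            task_id="wait_for_landing",
            bash_command="echo checking landing partition",
        )
        refine = PythonOperator(task_id="refine",
                                python_callable=refine_events)
        wait >> refine  # the same dependency Oozie expressed in XML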

And these are our more formal requirements:

  •    A couple of years' experience in an SRE/Operations/DevOps role as part of a team
  •    Experience in supporting complex web applications running highly available and high traffic infrastructure based on Linux
  •    Comfortable with configuration management and orchestration tools (Puppet, Ansible, Chef, SaltStack, etc.), and modern observability infrastructure (monitoring, metrics and logging)
  •    An appetite for the automation and streamlining of tasks
  •    Willingness to work with JVM-based systems  
  •    Comfortable with shell and scripting languages used in an SRE/Operations engineering context (e.g. Python, Go, Bash, Ruby, etc.)
  •    Good understanding of Linux/Unix fundamentals and debugging skills
  •    Strong English language skills and ability to work independently, as an effective part of a globally distributed team
  •    B.S. or M.S. in Computer Science, related field or equivalent in related work experience. Do not feel you need a degree to apply; we value hands-on experience most of all.

The Wikimedia Foundation is... 

...the nonprofit organization that hosts and operates Wikipedia and the other Wikimedia free knowledge projects. Our vision is a world in which every single human can freely share in the sum of all knowledge. We believe that everyone has the potential to contribute something to our shared knowledge, and that everyone should be able to access that knowledge, free of interference. We host the Wikimedia projects, build software experiences for reading, contributing, and sharing Wikimedia content, support the volunteer communities and partners who make Wikimedia possible, and advocate for policies that enable Wikimedia and free knowledge to thrive. The Wikimedia Foundation is a charitable, not-for-profit organization that relies on donations. We receive financial support from millions of individuals around the world, with an average donation of about $15. We also receive donations through institutional grants and gifts. The Wikimedia Foundation is a United States 501(c)(3) tax-exempt organization with offices in San Francisco, California, USA.

The Wikimedia Foundation is an equal opportunity employer, and we encourage people with a diverse range of backgrounds to apply.

U.S. Benefits & Perks*

  • Fully paid medical, dental and vision coverage for employees and their eligible families (yes, fully paid premiums!)
  • The Wellness Program provides reimbursement for mind, body and soul activities such as fitness memberships, baby sitting, continuing education and much more
  • The 401(k) retirement plan offers matched contributions at 4% of annual salary
  • Flexible and generous time off - vacation, sick and volunteer days, plus 19 paid holidays - including the last week of the year.
  • Family friendly! 100% paid new parent leave for seven weeks plus an additional five weeks for pregnancy, flexible options to phase back in after leave, fully equipped lactation room.
  • For those emergency moments - long and short term disability, life insurance (2x salary) and an employee assistance program
  • Pre-tax savings plans for health care, child care, elder care, public transportation and parking expenses
  • Telecommuting and flexible work schedules available
  • Appropriate fuel for thinking and coding (aka, a pantry full of treats) and monthly massages to help staff relax
  • Great colleagues - diverse staff and contractors speaking dozens of languages from around the world, fantastic intellectual discourse, mission-driven and intensely passionate people

*Eligible international workers' benefits are specific to their location and dependent on their employer of record

Share this job: