Company Description

You will join a world-class team of engineers and data scientists from Facebook, Uber, Amazon and Google. We are a fast growing consulting firm based in Toronto with clients ranging from leading startups building impactful technologies to Fortune 500 companies looking to scale their engineering and data capabilities.
Job Description
We are looking for a data engineer who is passionate about analytics and helping companies build and scale data. You enjoy working with data and are motivated to produce high-quality data tools and pipelines that empower other data scientists. You are experienced in architecting ETL workflows and data schemas. Critical thinking and problem-solving skills are essential for this role.
Qualifications
- BS (or higher, e.g., MS or PhD) in Computer Science, Engineering, Math, or Statistics
- Hands-on experience working with user engagement, social, marketing, and/or finance data
- Proficient in Python (e.g. Pandas, NumPy, scikit-learn), R, and TensorFlow, among other data science tools and libraries
- Extensive experience working on relational databases, designing complex data schemas, and writing SQL queries
- Deep knowledge of performance tuning for ETL jobs, SQL, and databases
- Working knowledge of Snowflake
- Experience working with Airflow is a strong plus
- DevOps experience is a plus
We offer very competitive compensation.
You will work on cool projects based on your interests and skills. We believe in accountability, NOT micro-management.

Company Description

You will join a world-class team of engineers and data scientists from Facebook, Uber, Amazon and Google. We are a fast growing consulting firm based in Toronto with clients ranging from leading startups building impactful technologies to Fortune 500 companies looking to scale their engineering and data capabilities.
Job Description
We are looking for a Data Science Lead who will:
- Partner closely with our clients' executive and technical teams to guide their decisions relating to AI, analytics, data engineering, and data in general
- Proactively identify and champion projects that solve complex problems across multiple domains
- Apply specialized skills and fundamental data science methods (e.g. experimental design, recommender systems, and machine learning when needed) to inform improvements to our clients' businesses
- Design and implement end-to-end data pipelines: work closely with stakeholders to build models, tables, or schemas that support our clients' business processes
- Empower teams both internal and external to build and maintain actionable KPIs, production-quality dashboards, informative deep dives, and scalable data products
Qualifications
- BS (or higher, e.g., MS or PhD) in Computer Science, Engineering, Math, or Statistics
- 3-7 years of commercial experience as a data science lead solving high-impact business problems, including 1+ years leading and maintaining high-performance data science teams
- Extensive experience using Python, including a strong grasp of object-oriented programming (OOP) fundamentals
- Experience building data pipelines and driving ETL design decisions
- Experience delivering business impact through analysis and data products
- Technical leadership and management experience: you can jump into the code at a level your peers admire and respect, and you are also interested in mentoring them
- Experience creating data product strategies, shipping data products, iterating after launch, and trying again
- Strong ability to prioritize and communicate to technical and non-technical audiences alike
- Working knowledge of Snowflake
- DevOps experience a plus
We offer very competitive compensation.
You will work on cool projects based on your interests and skills. We believe in accountability, NOT micro-management.


Senior Data Scientist
Adarga is a rapidly scaling organisation, and we are growing our technology department. As a Senior Data Scientist within the product team, you will bring your expertise to a full stack technology team working to solve Natural Language Processing (NLP) linguistic challenges. Adarga is product focussed and is developing the world’s leading AI software to provide effortless access to knowledge from real world data and enable our customers to make better decisions.
We are a remote first organisation with 90 people based across the United Kingdom.
What will you be doing?
We are looking for an experienced Data Scientist with a strong mathematical background and experience in NLP. You will be hands-on with day-to-day project work focussed on developing the best solutions to deliver a roadmap which is aligned to core product features. You will also work alongside a product manager to develop new features. The senior data scientist will be working on projects which continually develop and improve our AI software using the appropriate MLOps tools to deploy, scale and monitor models in production.
As the Senior Data Scientist, you will report to the product tech lead and Data Science Manager.
Essential skills
- Detailed knowledge of AI techniques, including how to train, fine tune and apply deep learning models.
- NLP research is advancing at a rapid pace; an enthusiasm for continual learning is required to maintain your expertise.
- An experimentation mindset to solve business problems.
- Experience building Data Science solutions in a commercial environment and the ability to quantify improvements based upon analysis.
- Good awareness of software engineering and coding best practices.
- Excellent communication skills and awareness of project management techniques, capable of operating within a team of 6-7 people to influence and develop best practice.
- Ability to thrive in a predominantly remote working environment.
- PhD or equivalent professional experience in a data science role.
Nice to have skills
- Experience of modern NLP techniques is an advantage.
- Experience of using data science platforms and frameworks, for data tagging, model training and benchmarking.
- Experience building and deploying solutions to the Cloud.
- Experience with the following technologies would be beneficial: Python, Seldon, Helm, Pachyderm, AWS or other cloud technologies, Kubernetes, Docker, knowledge graphs, graph databases, SQL and relational databases.
Further information
At Adarga we use Kanban principles and daily stand-ups to track work tasks, and it will be your responsibility to attend these and document your work to ensure it is repeatable.
It will be necessary to take part in cross-functional and cross-team communication to enhance collaboration; this will take the form of knowledge shares, reading groups, chapter group meetings, workshops and innovation projects. You will be set quarterly OKRs, and these will be reviewed frequently with your line manager. As a senior member of the team, you may be required to mentor junior team members, contribute to technology discussions, and support academic and AI community engagements, marketing events, and recruitment activities.
Contact
Find out more about Adarga: https://adarga.ai/careers
Adarga Benefits
- Hybrid working model available
- Adarga anywhere scheme offers you the chance to work wherever you like for part of the year
- Generous holiday package with an opportunity to buy and sell holiday
- Adarga share scheme available for all employees
- £1000 annual training budget
- Culture of knowledge sharing with Team Lunch and Learns
- Innovation Time- Collaborate with like-minded colleagues on projects that are important to you and the business
- Company MacBook
- Competitive Pension Scheme
Diversity & Inclusion at Adarga
Adarga recruits, trains, promotes & employs regardless of race, background, gender, colour, religion, disability, age, gender identity or any other protected status. In addition to this, we actively promote and encourage diversity of thought through multiple avenues. Adarga is dedicated to building and maintaining an inclusive environment where people feel valued, appreciated, and free to be themselves without prejudice.
Recruitment company statement
We approach everything with transparency and integrity. To avoid wasting anyone’s time, here is our policy on working with recruitment companies:
At present, we are not working with any new recruitment agencies, because we only want to work directly with individual applicants who can contact our People Team through [email protected]
Recruitment Privacy Policy

WHO WE ARE
Optimize Your Buildings and Infrastructure with Cartegraph.
Cities and Counties. Colleges and Universities. Parks, Utilities, and More.
Cartegraph builds safer, more resilient, and more sustainable communities through better stewardship of critical infrastructure. We design, build, and sell powerful, intuitive software solutions that help local government agencies, utilities, and educational institutions manage their physical assets and associated operations. With Cartegraph, users optimize the performance of their assets, drive efficiencies, maximize resources, and reduce costs through smarter, data-driven decisions.
JOB SUMMARY
As a Data Conversion Engineer, you will have personality to share and technical expertise to spare. We’re building and selling the most innovative operations management technology in the world, and we need someone to ensure data transformation and system integration for our clients is smooth and seamless. Interested?
THE WORK YOU WILL DO
- You’ll work full-time for great pay in a collaborative environment.
- You’ll take ownership of design, development and testing of all data transformation, extraction, and migration activities to the Cartegraph system.
- You’ll provide project delivery that leaves our users raving about the great experiences they have working with Cartegraph.
ADDITIONAL RESPONSIBILITIES
- Reviews technical publications, articles, and abstracts to stay abreast of technical developments in industry and works toward continuous quality improvement.
- Attends meetings as scheduled.
- Performs miscellaneous related duties as requested by management.
- Complies with all company policies and procedures.
Requirements
WHAT YOU BRING TO THE TABLE
- You have a 4-yr. degree in an applicable field OR a 2-yr. degree with at least two years of relevant experience with data transformation.
- You know the ins-and-outs of T-SQL, MS SQL Server Management Studio, database structure and design methodology, data import techniques, and Microsoft Excel.
- You have experience creating extract-transform-load capabilities using MS SQL Server, FME, Python, or related technologies.
- You understand how to effectively discover and capture customer requirements as they relate to data conversion and other complex configurations.
- You’re familiar with data management and interoperability concepts and standards to help customers fully realize the value of their enterprise data.
- You’re an effective verbal communicator with the ability to explain complex issues clearly and simply.
- You’re an effective written communicator with the ability to build system design documentation.
- You’re a good listener and loyal advocate of customer success.
- You’re organized, you manage your time well, and you stand behind your work.
TRAVEL
Ability and willingness to travel 10%.
Benefits
We tried to narrow down to just 5 reasons to work here, but it was impossible...
- Opportunities to apply your skills and experiences with an established industry leader
- Options to grow your career with Cartegraph's Career Pathing program
- Collaborative work environment
- Co-workers with a passion for what they do
- Casual and flexible work environment
- Competitive pay and benefits
- 15 days of PTO in just your first year (whoa)
- Paid parental leave
- Wellness & EAP Program
- One week sabbatical every five years
- Paid holidays (plus a couple extra days)
- 401k plan with generous employer match
- Social events, coffee and snacks, foosball, and more...
Learn more about working at Cartegraph and our Midwest-Nice culture:
https://www.glassdoor.com/Overview/Working-at-Cartegraph-EI_IE335255.11,21.htm
In our efforts to recruit those who will elevate our success and ensure a proper career fit, we have a multi-faceted application process. As a part of our process, we invite you to complete the Culture Index Survey. The survey contains two questions and should take no more than 10-15 minutes. If you have questions, please contact [email protected].
Cartegraph is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees and do not tolerate discrimination of any kind. If you are unable to complete our online employment application or require an accommodation to do so, please call 563-587-3366 for assistance.

E&C has two openings for a Data Analyst position, one in Spain and one in Melbourne. You will join an international team working with energy data from all over the world.
As a Data Analyst you will work with ePoint - a unique web application developed by E&C which brings together and visualizes information on a global energy portfolio down to the last detail. It holds information on clients' consumption, contract situation, hedging situation, risk management and the energy markets. We are looking for an operator who will manage a set of client sites.
What will you do?
- Work with data to support E&C's contracting & risk management activities
- Deliver ePoint sites and reports
- Take responsibility for the correct and timely processing of energy prices, energy contracts and hedges of our clients around the world
- Make regular and random data quality checks
- Process and manage data of our clients in our SQL database
- Develop tailor-made reports/dashboards/graphs for our clients regarding their energy data portfolios, using Power BI
- Learn to become a specialist in different energy markets across the globe
- Perform various data management activities relating to data reporting.
What do we offer you?
- A long-term and full-time contract
- Balance Days resulting in 46 days off / year
- Flexible hybrid/remote work mode possible after initial onboarding
- Challenging and interesting environment in the continuously evolving energy sector
- International work environment with more than 25 nationalities and English as main working language
- Flat company structure with agile way of working, offering autonomy and empowerment from day 1
- Intensive onboarding procedure with a buddy and an internal training program
- Regular discussions on your role(s) and competencies with your team leader, allowing you to define a challenging career path that makes the best use of your talents and interests
- A competitive salary
What makes you a perfect match?
- Most of all, a can-do attitude and eagerness to learn
- 1-2 years of experience in data reporting or analytics is preferred
- Proficiency in Excel and a real interest in working with it every day; experience with Power BI is a plus
- Business English
- Accuracy, being meticulous and detail-focused
- High sense of ownership


location: USA only
Company Description

M3 is the global leader in digital solutions in healthcare. We work with a range of sectors including healthcare, life sciences, pharmaceuticals, biotechnology and charities; services provided to these sectors include market research, medical education programs, promotional programs, clinical development, job recruitment and clinic appointment services. M3 has grown by more than 20% year-on-year for the past 15 years, achieving over 1 billion dollars in revenue annually.
Founded in 2000, with start-up investment capital from Sony, M3’s mission is to make use of the internet to increase, as much as possible, the number of people who can live longer and healthier lives, and to reduce, as much as possible, the amount of unnecessary medical costs. Key achievements of M3 Inc. include:
- Named in Fortune’s 2020 ‘Future 50’ list, ahead of Facebook and Amazon
- Ranked in Forbes’ 2020 Global 2000 list of the world’s largest public companies
- M3 is the only company incorporated after the year 2000 to be included in the Nikkei 225 Index
- Listed in Forbes’ ‘Asia’s 200 Best Over a Billion’ in 2019
- In 2020 M3 founded the ‘M3: Stop COVID-19 Fund’ and pledged one billion yen to support COVID-19 related initiatives
M3’s legacy lies in the power of trusted physician platforms. Around the world, M3 organizations leverage these highly engaged digital communities to deliver medical education, job placement, and market research, to improve global patient outcomes.
Because of our continued growth, we are hiring for a Database Administrator at NAS Recruitment Innovation, an M3 company.
Job Description
- Manage databases (Azure SQL, MySQL, Mongo) in Production/QA/Dev environments, including installation, configuration, upgrades, schema changes, etc.
- Provide best practices on design, availability, replication, security, audit, and backup/recovery methodologies. Continuously re-evaluate the infrastructure for opportunities for optimization
- Troubleshoot database issues and maintain database systems availability and scalability within production environments
- Perform capacity-planning exercises to properly identify required hardware, software, database configuration/architecture necessary to support application needs
Qualifications
- Experience (2-3 years or more) managing production database systems, preferably in a SaaS environment.
- Understanding of database packaged releases, patching strategy, and enforcing standards via tools and automation
- Experience with database server tuning, query optimization, replication strategies, storage engines, monitoring, capacity planning, backup, HA, etc.
- Experience in Analytics/BI/Data Warehouse, specifically Power BI/Azure Data Factory is a plus
- Ability to develop procedures, processes, and conventions to standardize database operations
- Understanding of RDBMS security administration
- Experience in scripting languages of your choice (PowerShell, Python, etc.) for monitoring and custom automation preferred
- Experience working with source code management tools (Git)
- Understanding of automated configuration management (Ansible, Terraform)
- Experience with Cloud based systems or Virtual environments (AWS, Azure, Google Cloud)
- Familiarity with working in a GDPR/CCPA/HIPAA-compliant environment and how to design/follow procedures as well as write documentation to maintain compliance preferred
- Previous Development/Operations experience a plus
Benefits:
A career opportunity with M3USA offers competitive wages and benefits such as:
- Health and Dental
- Life, Accident and Disability Insurance
- Prescription Plan
- Flexible Spending Account
- 401k Plan and Match
- Paid Holidays and Vacation
- Sick Days and Personal Day
*M3 reserves the right to change this job description to meet the business needs of the organization


location: Remote, US
Authorization Coordinator
at Headspace Health
Remote
Headspace and Ginger have recently merged to become Headspace Health! While roles are still being recruited separately on our respective websites, new hires from this point forward will be joining Headspace Health. For more information, please speak with your recruiter!
About the Authorization Coordinator role at Headspace Health:
As we grow our health plan partnerships and clinical volume, the Authorization Coordinator will play an important role in ensuring insurance / EAP authorizations are in place – a key component to access clinical care and successfully bill payers. This role will be responsible for proactively and accurately coordinating and obtaining information required in the authorization process. An enthusiastic, self-motivated and detail-oriented professional with a desire for a fast-paced, high-growth environment would do well in this role.
This is a remote role in the US.
How your skills and passion will come to life at Headspace Health:
- Proactively coordinate all information required of the authorization process, ensuring complete, accurate and timely information collection and entry
- Efficiently obtain and track authorization approvals from payers by phone, fax and electronic data exchange
- Accurately transcribe and verify fax authorizations to create eligibility files and electronic medical records
- Support the authorizations to billing pathway in submitting claims when needed
- Work claim rejections and denials related to authorizations
- Address questions from patients and other internal stakeholders (e.g. Member Support, Clinicians, etc.) regarding authorizations
- Effectively organize and manage workload to ensure timely completion
- Communicate responsively and positively with external partners and internal teammates
- Collaborate with the clinical operations team as needed in support of access to care and successful claims submission
- Meet weekly with manager and team to collaborate on individual and team objectives and outcomes
What you’ve accomplished:
- BA/BS in Social Sciences or related field; or at least three (3) years of relevant experience in a health care physician billing environment
- Proficient working knowledge of G-Suite required
- Strong organizational, time management, and problem-solving skills
- Strong attention to detail, fast learner
- Strong interpersonal and written communication skills
- Comfortable with ambiguity and excelling in a scaling start-up environment
- Self-directed & motivated
- Genuine connection to Ginger’s mission and behavioral health
Company Description

As the industry leader in compensation management, Payscale is on a mission to help job seekers, employees and businesses get pay right, and make sustainable fair pay a reality. Empowering more than 53 percent of the Fortune 500 in 198 countries, Payscale provides a combination of data-driven insights, best-in-class services, and innovative software to enable organizations such as Angel City Football Club, Perry Ellis International, United Healthcare, Vista, and The Washington Post to make fair and appropriate pay decisions. Pay is powerful. To learn more, visit www.payscale.com.
- Voted Seattle’s and Boston’s best places to work according to Built In 2022.
- Voted one of Seattle’s companies with the best benefits according to Built In 2022.
Does the opportunity to work with a truly unique compensation dataset that comprises nearly a third of the U.S. working population excite you? What about working on important issues like pay equity?
The Data team at Payscale works on a variety of different problems that have a direct impact on people's lives by improving compensation practices. We are a small team working on high impact projects that drive innovation in our customer facing products and across the organization. The Data team partners closely with data scientists, engineers, compensation professionals, product managers, and other internal stakeholders to design and develop the data sets that power our compensation tools.
What you’ll be doing:
- Serve as a project manager on larger and more complex updates and audits to our data sets. This can involve the creation of new international data sets
- Grow and maintain our job taxonomy by researching new and emerging jobs, writing job descriptions, and auditing existing jobs for improvements, consistencies, and reorganizations
- Work to improve our proprietary data sets by analyzing their structural similarities and differences and working to build crosswalks and translations between them
- Price jobs and conduct market analysis to determine competitive base and total compensation in multiple labor markets
- Audit existing and create new company job matches to our proprietary data sets
- Ensure the accuracy and quality of data sets by performing thorough and complex data analysis and validation
- Provide internal and external client support on all data sets including answering questions on data methodology, pulling sample data, compiling reports, creating data visualization dashboards, and joining client calls
- Work with cross functional teams to provide assistance on or insight to ongoing data analysis projects
- Understand the ins and outs of our compensation datasets to become a subject matter expert on the various factors that influence pay
- Participate in client-focused working sessions to help our customers align their company jobs to the jobs offered in our data sets
- Prepare files and partner with our Database Administrators to update our database tables
Technology we use:
- Excel, SQL, Snowflake, Tableau
Requirements
- This is an experienced position where prior experience in compensation and data analytics is required
- A background in statistics, economics, mathematics, compensation, or other related fields
- Customer service skills, focused on problem solving and troubleshooting
- Experience pricing jobs and conducting market analysis in multiple labor markets
- Research and data analysis skills and mindset
- An aptitude for data and experience writing well structured SQL queries to get information from disparate datasets
- Experience with relational databases
- Excellent Excel skills, including comfort with advanced functions such as VLOOKUPs and pivot tables; experience writing macros is a plus
- Experience with data visualization – Tableau experience preferred
- Ability to juggle multiple projects
- Project management skills
- Ability to effectively communicate and share results with both technical and non-technical partners
- A desire to be the best, with a positive work attitude and inquisitive nature. You will be working in a highly flexible and fast paced environment
Benefits and Perks
The Highlights:
An all-around awesome culture where together we strive to:
- Pursue excellence every day
- Create customer value
- Compete to win (and lose!) as a team
An open and inclusive culture where you’ll learn and grow through programs and resources like:
- Monthly company all hands meetings
- Regular opportunities for executive leadership exposure
- Access to top-notch learning courses through LinkedIn Learning
- Regular manager check-ins to drive performance and career growth
- A growing network of employee resource groups
- And more!
Our more standard benefits
- A remote-first company culture where you’re able to work from your home, with company-provided equipment to set you up for success
- Discretionary Paid Time Off, giving you flexibility to rest, relax and recharge away from work.
- 15 Paid Company Holidays, including an extended Fourth of July Break, World Mental Health Day, and Juneteenth
- A comprehensive benefits plan including medical, dental, life, vision, disability, 401(k) covered up to 100% by Payscale
- Up to 12 weeks of paid parental leave, including coverage for pregnancy loss
- Flexible Spending Account options for pre-tax employee allocations
Equal Opportunity Employer:
We embrace equal employment opportunity. PayScale is committed to a policy of equal employment opportunity for all applicants and employees. It is our policy that employees will not be subjected to unlawful discrimination on the basis of race, color, religion, sex, age, national origin, or ancestry, physical or mental disability, veteran or military status, marital status, sexual orientation, political ideology, and any other basis protected by federal, state, or local laws. This policy applies to all terms and conditions of employment, including but not limited to: recruitment, hiring, transfers, promotions, training, discipline, termination, compensation and benefits, performance appraisals, education, and social and recreational programs.

KORE Software is the global leader in engagement marketing solutions, comprising KORE Software, Hookit, Sports Alliance and KPI (KORE Planning and Insights). Over 900 brands, venues, and sports organizations trust KORE's tools and platform as a source of truth that drives smarter partnership decisions and deepens connections with their audiences. Through Sponsorship Management and Evaluation, Ticketing & Fan Engagement, and Data Management & Analytics, KORE's two-sided network unites corporate sponsors, sponsorship properties, and their fans with solutions that accelerate and inform an organization's ability to act. This allows teams to enhance the fan experience, maximize returns, and spend time where it matters.
Role Overview
KORE Planning & Insights (KPI) is KORE's consulting practice providing key insights and expertise to the Sport & Entertainment industry in addition to our best-in-class software. KPI works with rights-holders, brands and agencies delivering integrated analysis and strategy on all aspects of sports business.
This team member will report directly to the Sr. Director Global Analytics & Strategy and will focus on projects for rights holders and brand clients. The focus will be on partnership analytics solutions (partnership valuation and evaluation) as well as advisory solutions (go-to-market, partnership strategy). Furthermore, the role will work alongside KORE Software's insights, success, and product teams to identify opportunities to incorporate solutions into its suite of products, and work with the new business team to expand KPI's business.
The ideal candidate for this position will be self-motivated, able to meet deadlines, detail-oriented, data-focused, and have a strong ability to work well in a team-based setting with internal and external stakeholders. They will be expected to take ownership of client projects, assume responsibility for delivering great work, and act as a trusted advisor to our clients. This will require strong project management, excellence in presentation building and delivery, as well as an in-depth understanding of sponsorship and marketing in the sports & entertainment industry. They're also expected to demonstrate the ability to build data visualizations (Tableau/PowerPoint), and show the technical aptitude to expand their SQL scripting skills.
Duties and Responsibilities
Client Strategy
Partnership Advisory
- Leverage our deep relationships and knowledge base with agencies, brands, rights holders, and consumers across the globe to create strategies that drive both brand value and business results for our clients and their partners.
- Make informed recommendations to clients based upon their objectives, market trends, measurement, value drivers, potential opportunities etc.
- Facilitate the work to inform the strategy for key partner programs, and identification of growth opportunities to be used in renewals and negotiations.
Partnership Analytics
- Combine KPI's unique insights and technological expertise with additional research tools to inform strategy, partnership evaluation, and value associated with ROI & ROO.
- Provide partners with dynamic measurement and reporting across key metrics, and demonstrate performance of partners across a variety of data types and sources.
- Help implement the data analytics vision for business decision making through Tableau dashboard design for clients.
- Leverage data and reporting to drive greater adoption into customer software products
Account Management
- Lead the day-to-day management of select clients, with a focus on working directly with client team members, executing deliverables, measuring/analyzing data points, and providing strategic recommendations to both internal and external stakeholders.
- Ensure that projects are produced according to SOW, on-time, and are aligned with client expectations and budgets.
Departmental Development
Strategy & Offerings
- Identify offerings that KPI can provide clients & prospects to optimize commercial potential, and maximize current partnerships
- Work closely with KORE's success and sales organizations to understand current needs from both software and consulting clients, and design scalable product offerings to address client needs
- Monitor trends in the sports, entertainment, and technology landscapes to keep team members and executives up to date, and identify new opportunities.
Revenue Generation
- Design custom solutions for clients based on KPI's current offerings, along with the technical aspects needed to support them. Comfortable with how solutions can work both within and outside the KORE Software suite of products.
- Work with sales and success team members to develop presentations and SOWs
- Lead presentations of proposals to clients, both in written form and face to face
- Work with marketing team to develop internal education and sales material to be leveraged by the customer success and sales teams
Qualifications
- Bachelor's Degree required, MBA preferred.
- 3+ years of business strategy, consulting, or agency experience (preferably in the sports/entertainment industry)
- 1+ year(s) experience with data visualization tools (Tableau, PowerBI, Looker)
- Experience with SQL, SAS, R, Python, and/or another statistical programming language a plus
- Proficient in Microsoft Suite, particularly with PowerPoint, Word and Excel
- Strong organizational, analytical, and technical skills
- Knowledge of sponsorship and the sports industry
- Ability to communicate professionally in both a written and verbal context with external end-users and internal colleagues
We Provide
- Medical, vision, and dental insurance
- Peer to peer recognition program
- Cell Phone Allowance
- 401K Matching
Salary Range: $90,000 to $100,000 (dependent upon experience)

< class="h3">Company Description

We are Vericast. We create meaningful connections between businesses and the people they serve: how, when and where it matters. By pushing the boundaries of data and insights, we spark discovery and inspire action to create profitable results.
< class="h3">Job Description
The API Platform Engineer – Data Mesh is a critical contributor to developing and growing a new data architecture and framework for scalable analytics and innovation called Data Mesh. This new architecture and framework enable an organization to build valuable datasets and make them discoverable, safe, reliable, and reusable by applying software product thinking to the job of making datasets easy and fun to use. This role will build the key architectural and feature components for the data mesh, including but not limited to search, catalog, data API and documentation, data lineage, and user access. Additionally, the project includes implementing the orchestration, transformation, and data movement of the datasets involved in the data mesh architecture.
Day to day responsibilities will include:
- Developing the architectural components of the Data Mesh using multiple frameworks, services, and languages such as Python, Java and JavaScript.
- Developing data pipelines using commercial and/or open-source data pipeline tools such as Apache Kafka, Apache Airflow, Confluent, AWS Data Pipeline or AWS Glue.
- Developing microservices, authentication and authorization capabilities.
- Developing and documenting APIs using commercial, full API lifecycle management tools and best practices.
- Building unit test cases for data products and executing them for coverage.
- Contributing to a cross-functional Agile Scrum team using Agile Scrum best practices, ceremonies, and artifacts.
Qualifications:
- 3-5+ years of hands-on application engineering experience, preferably in a cloud or hybrid cloud environment.
- Extensive experience building APIs & micro-services using techniques like REST, OpenAPI, Swagger, and Functions (AWS Lambda) is required.
- Working knowledge of API security certification, authentication, authorization, IP security setup, and endpoint configuration is preferred.
- Proficiency in programming languages including Java, JavaScript, and Python is required.
- Experience building and documenting APIs with high availability and performance standards is required.
- Experience with APIs feeding frontend web applications is preferred but not required.
- Experience working in an Agile Scrum team is required.
- Experience building data pipelines and data orchestration systems is preferred.
Vericast offers a generous total rewards benefits package that includes medical, dental and vision coverage, 401K matching and flexible PTO. A wide variety of additional benefits like life insurance, employee assistance and pet insurance are also available, not to mention smart and friendly coworkers!
At Vericast, we don’t just accept differences - we celebrate them, we support them, and we thrive on them for the benefit of our employees, our clients, and our community. As an Equal Opportunity employer, Vericast considers applicants for all positions without regard to race, color, creed, religion, national origin or ancestry, sex, sexual orientation, gender identity, age, disability, genetic information, veteran status, or any other classifications protected by law. Applicants who have disabilities may request that accommodations be made in order to complete the selection process by contacting our Talent Acquisition team at [email protected] or 616-541-3145.
#LI-LC1
#DICE

< class="h3">Company Description
< class="h3">Job Description

Choose challenge. Choose pace. Choose growth.
Our Tech department is the driving force behind one of the most successful start-ups in Europe since 2012. Based in Berlin, we have an innovative and driven team that is using cutting-edge technologies to redefine the used car market.
You will be part of a committed and enthusiastic team of Data Scientists, working with statistical modeling and dealing with challenging problems. As a Data Scientist at AUTO1 Group (which includes wirkaufendeinauto.de, AUTO1.com and Autohero), you have the opportunity to work on advanced topics (pricing, recommendation systems, etc) that utilize your machine learning techniques. You will build machine learning models and deploy the models in production environments to drive AUTO1 business forward.
Together with business analysts and production teams, you will help to revolutionize the process and user experience of how people sell and buy cars online and make a real impact on the market.
Are you already located in the EU and looking for a remote position? If we have an entity where you are, you are good to go. Otherwise, we are happy to assist you during your relocation to our Berlin headquarters!
Your role
- Translate business needs into practical data science questions, exercise your expert knowledge to deliver practical solutions, and bring positive impact
- Handle large amounts of data to drive data-driven business decisions
- Partner with engineering and product teams to drive actionable data insights
- Set priorities and support junior members to achieve business goals
- Identify business opportunities, collaborate with stakeholders and partners in tech to drive business growth
Your skills
- Advanced statistical and quantitative knowledge with practical experience; we welcome candidates with academic experience in a quantitative research field
- Demonstrated communication and visualization skills that enable you to present findings in a clear and unambiguous way
- Hands-on experience with a broad range of machine learning techniques and the ability to identify the appropriate technique in different scenarios
- Experience in effective collaboration, you are a natural team player and communicate fluently in English
- Proficient in SQL and Python. Experience in AWS/Azure/Google Cloud is a plus
Our offer
- Your work-life balance and working from home are part of our culture, so we offer you working from home options, flexible working hours and 30 days of vacation
- Relocation support to Germany includes visa assistance, apartment search, and help with costs
- Educational budget for your personal growth
- Team and company events like Hackathons, International Nights, Company Party, Football and more
- More than 90 different nationalities make for a truly international and diverse working environment
Apply by uploading your CV.
Contact: Igor Popov
AUTO1 Group is Europe's leading digital automotive platform. As a dynamic tech company, we are revolutionizing the automotive industry with our brands wirkaufendeinauto.de, AUTO1.com and Autohero. Our strong team of 5,000 people is dedicated to making Europe-wide car trading and transport as fast and stress-free as possible for our customers. Join us in building the best way to buy and sell cars online.
At AUTO1 Group we live an open culture, believe in direct communication, and value diversity. We welcome every applicant, regardless of gender, ethnic origin, religion, age, sexual identity, disability, or any other non-merit factor.
< class="h3">Company Description
< class="h3">Job Description

We’re looking for a Senior Data Reliability Engineer to join Procore’s Data Division. In this role, you’ll help build Procore’s next generation construction data platform for others to build upon including Procore developers, analysts, partners, and customers.
As a Senior Data Reliability Engineer, you’ll partner with other engineers and product managers across Product & Technology to develop data platform capabilities that enable the storage and management of data for use in transactional and analytical applications. To be successful in this role, you’re passionate about distributed data storage systems on the cloud with experience in relational and non-relational databases. If you’re a seasoned engineer comfortable and excited about building our next generation data platform and translating problems into pragmatic solutions that open up the boundaries of technical possibilities—we’d love to hear from you!
This position reports to the Senior Manager, Foundational Data Platform, and can be based in any of our US offices or remote. We're looking for someone to join us immediately.
What you’ll do:
- Participate in the design and implementation of our next generation data platform for the construction industry, primarily focused on building and supporting distributed databases
- Actively participate with our engineering team in all phases of the software development lifecycle, including requirements gathering, functional and technical design, development, testing and roll-out, and support
- Design and implement self-service tools for developers to more easily deploy and manage databases for their services
- Help identify and propose solutions for technical gaps in our data platform by running proofs of concept and experiments
- Work alongside our Product, UX, and IT teams, leveraging your experience and expertise in data systems to influence our product roadmap, developing innovative solutions that add additional capabilities to our tools
- Help uplevel teammates by conducting code reviews, providing mentorship, pairing, and training opportunities
- Stay up to date with the latest data technology trends
What we’re looking for:
- Bachelor’s Degree in Computer Science or a related field preferred, or comparable work experience
- 2+ years of experience provisioning and operating databases at scale, such as Postgres, Elasticsearch, DynamoDB, Neo4j, MongoDB, Cassandra, etc.
- 5+ years of experience building and operating cloud-based, highly available, distributed, and scalable databases with large amounts of data
- Experience and proficiency managing infrastructure as code
- Experience and proficiency in one or more scripting or programming languages (Python, shell scripting, Java, Ruby, Go, etc.)
- Experience implementing and supporting data lakes and streaming technologies (Kafka) is a nice-to-have
- Strong passion for learning, always open to new technologies and ideas
If you'd like to stay in touch and be the first to hear about new roles at Procore, join our Talent Community.
About Us
Procore Technologies is building the software that builds the world. We provide cloud-based construction management software that helps clients more efficiently build skyscrapers, hospitals, retail centers, airports, housing complexes, and more. At Procore, we have worked hard to create and maintain a culture where you can own your work and are encouraged and given resources to try new ideas. Check us out on Glassdoor to see what others are saying about working at Procore.
We are an equal opportunity employer and welcome builders of all backgrounds. We thrive in a diverse, dynamic, and inclusive environment. We do not tolerate discrimination against employees on the basis of age, color, disability, gender, gender identity or expression, marital status, national origin, political affiliation, race, religion, sexual orientation, veteran status, or any other classification protected by law.
Perks & Benefits
You are a person with dreams, goals, and ambitions—both personally and professionally. That's why we believe in providing benefits that not only match our Procore values (Openness, Optimism, and Ownership) but enhance the lives of our team members. Here are just a few of our benefit offerings: generous paid vacation, employee stock purchase plan, enrichment and development programs, and friends and family events.

< class="h3">Company Description

Etsy is the global marketplace for unique and creative goods. We build, power, and evolve the tools and technologies that connect millions of entrepreneurs with millions of buyers around the world. As an Etsy Inc. employee – whether a team member of Etsy, Reverb, Depop, or Elo7 – you’ll tackle unique, meaningful, and large-scale problems alongside passionate coworkers, all the while making a rewarding impact and Keeping Commerce Human.
< class="h3">Job Description
What’s the role?
Etsy's millions of buyers and sellers generate petabytes of data that we use for data science, experimentation, marketing, and powering seller and buyer experiences. The Data Engineering team's work helps Etsy continuously increase the quality of these experiences by providing powerful datasets and services.
The Data Engineering team is looking for an enthusiastic and motivated Engineering Manager to lead the Batch Engines team in our Data Platform department. This department owns all the infrastructure and developer tools that enable other engineers and data scientists to write data pipelines. The Batch Engines team is responsible for the infra and tools around our core batch processing engines: Spark and Dataflow. Opportunities include improving reliability, efficiency, as well as improving the developer experience by making things like environment setup, interactive exploration, job optimization, monitoring, and testing easy for our internal users.
This is a full-time position reporting to the Engineering Director of Data Platform & Infrastructure. For this role, we are considering candidates based in [country] who are either remote, flex, or office-based. Etsy offers different work modes to meet the variety of needs and preferences of our team. Learn more about our flexible work options and vaccination policy here.
What’s this team like at Etsy?
- Our team is dedicated to building software that reduces toil, increases access to data, and respects privacy.
- We seek to build scalable systems that are flexible but maintainable.
- Our Big Data stack includes Spark, Kubernetes, Python, Java/Scala, and we’re users of Google Cloud Platform, particularly Dataproc, Dataflow, and BigQuery.
What does the day-to-day look like?
- You are responsible for attracting, mentoring, supporting and retaining a team of top-notch engineers who design, build and support Etsy’s Big Data infrastructure.
- You drive the technical and strategic vision of the team, and help translate day-to-day issues into coherent and robust systems.
- You collaborate with external (sometimes non-technical) stakeholders and customers throughout the company.
- You plan and prioritize the team’s road map, and champion team projects internally.
- Of course, this is just a sample of the kinds of work this role will require! You should assume that your role will encompass other tasks, too, and that your job duties and responsibilities may change from time to time at Etsy’s discretion, or otherwise applicable with local law.
Qualities that will help you thrive in this role are:
- You have a passion for data related services, and understand good systems design.
- If you have specific experience building and supporting platforms and pipelines for data processing, that is a plus, as is any experience with cloud services.
- If you have experience building platforms or tools where the main users are other engineers or data scientists, that is a plus.
- You are comfortable translating sometimes ambiguous business problems into plans that our engineers can act on and feel passionate about.
- You care about facilitating learning and growth on the team - you put our engineers first.
- You are thoughtful about all the ways an engineering manager can have an impact.
What's Next
If you're interested in joining the team at Etsy, please share your resume with us and feel free to include a cover letter if you'd like. As we hope you've seen already, Etsy is a place that values individuality and variety. We don't want you to be like everyone else -- we want you to be like you! So tell us what you're all about.
Our Promise
At Etsy, we believe that a diverse, equitable and inclusive workplace furthers relevance, resilience, and longevity. We encourage people from all backgrounds, ages, abilities, and experiences to apply. Etsy is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. If, due to a disability, you need an accommodation during any part of the interview process, please let your recruiter know. While Etsy supports visa sponsorship, sponsorship opportunities may be limited to certain roles and skills.
For U.S. roles only:
Many Etsy roles are open to remote candidates, and you'll be able to identify which ones within the location header of each job description. We're open to remote hires from all U.S. states except Hawaii and Alaska. For candidates who will work remotely from Colorado, visit this link for information related to Colorado's Equal Pay for Equal Work Act.

< class="h3">Company Description

We are a multinational team of individuals who believe that with the right knowledge and approach, technology is the answer to the challenges that businesses face today. We have been bringing this knowledge and approach to our customers since 2016, helping them translate technology into their success.
With Swiss roots and our own development team in Lima and throughout the region, we offer the best of both cultural areas: the talent and passion of Latin American professionals combined with Swiss organization skills and mindset.
< class="h3">Job Description
We are looking for a Snowflake Data Engineer to work 100% remotely.
< class="h3">Qualifications
- Only for residents in Latin America
- At least 6 years of experience as a Data Engineer
- 5+ years of ETL/SQL experience
- 1-2 years of hands-on experience with Snowflake/SnowSQL
- AWS experience preferred
- Advanced English level
Nice to have:
- Informatica PowerCenter

< class="h3">Company Description

We are a multinational team of individuals who believe that with the right knowledge and approach, technology is the answer to the challenges that businesses face today. We have been bringing this knowledge and approach to our customers since 2016, helping them translate technology into their success.
With Swiss roots and our own development team in Lima and throughout the region, we offer the best of both cultural areas: the talent and passion of Latin American professionals combined with Swiss organization skills and mindset.
< class="h3">Job Description
Bertoni Solutions is looking for a Senior Azure Data Engineer to work 100% remotely from Latin America. We are looking for strong experience, fluent English, and a high level of interest in working remotely for US clients.
< class="h3">Qualifications
- 8 years of experience
- Microsoft Azure: Databricks, ADF, ADLS, Synapse, etc.
- PySpark
- Spark
- Experience with Azure Data Factory
- Experience in EDW design with Synapse
- Strong SQL knowledge
- Desirable: Scala
- Advanced written and conversational English

< class="h3">Company Description

CoEnterprise is an award-winning B2B software and professional services company headquartered in New York City. Founded in 2010, CoEnterprise delivers Supply Chain and Business Analytics solutions and services that transform how companies connect and do business. CoEnterprise approaches each relationship and engagement from the perspective of three core values: collaboration, ownership, and excellence. We value collaboration with both our partners and clients in order to present the best possible outcome for our customers. Our vow to accept ownership ensures that our entire staff takes pride in our work, and it is our commitment to excellence that ensures this work is of the highest possible standard.
< class="h3">Job Description
Responsibilities
- Elicit, understand and restate complex business challenges related to realizing an organization’s advanced analytics data strategy
- Define and visualize data architecture patterns for analytics solutions, modeling both AS-IS and TO-BE data architecture topologies for both on-prem and the cloud
- Identify, propose and justify data warehousing, data modeling and analytics architectures for BI, data science, ad-hoc query analysis, data sharing and application development
- Synthesize customer analytics challenges into solutions for the Snowflake Data Cloud
- Establish confidence in recommendations via product expertise, custom product demonstrations, technical phone calls, RFP/RFI responses, product roadmap discussions, architectural topology options & business process diagrams
- Articulate and support selected recommendations consultatively and with a business-minded awareness of advancing a deal
- Partner with sales to deliver on revenue plan
- Demonstrate and advise in our core Analytics platforms including:
- Snowflake
- Tableau Desktop, Alteryx, Dataiku or equivalent BI tools
- Alteryx and/or related data preparation tools
- Demonstrate, advise and make solution recommendations similar to the following tools and skillsets:
- DataRobot, Dataiku, Databricks, Spark and related data science tools
- Python, ML/AI technology options and best practices
- The current ecosystem of data warehousing solutions including those on Azure, AWS and GCP
- API integrations
- Communicate and champion the methods, processes, and certifications which make CoEnterprise a leading provider of Cloud Analytics Services
- Engage with both internal teams and customers in a consultative and approachable manner
- Design and deliver presentation materials within established content and style parameters
Professional Skills
- Proficient in delivering software demonstrations in-person and virtually
- Proven experience working with employees at all levels of an organization
- Comfortable developing and presenting solutions
- Experience creating technical business documentation like workflow diagrams, proposals, SOWs, RFPs and RFIs, etc.
- Structured and methodical approach to creating and maintaining notes, deliverables, statements of work and other work artifacts in accordance with team standards
- Strong verbal and written communication skills
- Comfortable prioritizing and managing multiple, often competing, workstreams effectively.
- Must be a continually curious, committed, and efficient learner of new business and technology skills, highly responsive to emerging sales requirements
Other
- Willingness to travel 45% or more as needed
- 3+ years' prior experience in mid-market or enterprise-level consulting, delivering, or selling SaaS solutions and concepts
- 3+ years building analytics solutions in the cloud, including design and delivery of data lakes, data warehouses and data marts
- 2+ years working with the Snowflake Data Cloud
- Advanced SQL skills
- Proficient coding skills in at least one of the following: Python, JavaScript, R or other data science language
- Demonstrable experience with Enterprise-class Analytics software systems like Tableau, Alteryx, and Snowflake
- Familiarity implementing solutions in at least two of the following cloud providers: AWS, Azure, Google Cloud, and IBM Cloud
- Familiarity with system integration methods such as web services, SOAP APIs, and REST APIs
- Familiarity with Advanced Analytics Applications
Come experience our spirited culture and work with a smart, dedicated and high-energy team in a stable and fast-growing company! Here is a small sample of our benefits and perks we offer:
- Comprehensive Health Insurance with generous employer contribution
- Matching 401(k) - $$$$
- Generous PTO Policy
- Virtual Team Lunches
- Wellness Program
- Monthly Mingles
- Birthday Celebrations
- Virtual Events- Happy Hours, Casino Night, Magic Show, Scavenger Hunt of National History Museum, Game Nights and more
At CoEnterprise, we believe diversity drives innovation. We are committed to creating and maintaining a workplace in which all employees have an opportunity to participate and contribute to the success of our business. In recruiting for our team, we welcome the unique contributions that you can bring. We value employees for their differences represented by a variety of dimensions including demographics, behaviors, work style and perspectives.
We are an AA/EOE employer.


location: remoteus
Administrative Coordinator (Remote)
Job Locations: US-Remote
Requisition ID: 2022-78712
# of Openings: 1
Job Function: Clinical
Job Schedule: Regular Full-Time
Job Introduction
Maximus is currently hiring for an Administrative Coordinator. In this position, you will perform administrative tasks to ensure compliance with all contract-level requirements. This is a fully remote position with a salary range of $17-22/hour; pay is based on overall experience and qualifications.
Job Description Summary
Perform administrative tasks to ensure compliance with all contract level requirements
Job Summary
Essential Duties and Responsibilities:
- Provide customer support to internal and external customers
- Responsible for assigning and coordinating referrals for contract work to appropriate parties
- Computer data entry
- Perform all job duties in compliance with Person First standards, HIPAA guidelines, and company confidentiality policies and procedures.
- Complete assignments within established compliance standards and timelines
- Monitor multiple work queues daily to ensure cases move quickly through each process stage.
- Identify and resolve data errors
- Perform other related duties as assigned.
- Excellent written and verbal communication skills
- Excellent interpersonal and customer service skills
- Proficient in Microsoft Office Suite
- Excellent organizational skills and attention to detail
- Ability to work in a fast-paced environment
- Ability to work independently
Minimum Requirements:
- High School Degree or equivalent and 0-2 years of relevant experience, or Associate Degree
- Clinical office experience preferred
Education and Experience Requirements
- Self-starter with sense of urgency and the ability to work in fast-paced, complex, and deadline-driven environment
- Strong organizational skills including time management, calendar management, scheduling, project management, records and filing and using digital resources
- Ability to complete assignments with attention to detail and a high degree of accuracy
- Strong interpersonal skills including tact, diplomacy and flexibility to work effectively with all members of the organization
- Ability to work as a team member, as well as independently
- Demonstrated ability to communicate information clearly and accurately both verbally and in writing
- Ability to exercise judgment and discretion with highly sensitive and confidential information
- Proficient with Microsoft Office (intermediate level in Excel, PowerPoint, Sharepoint, and Microsoft Outlook)
MAXIMUS Introduction
Since 1975, Maximus has operated under its founding mission of Helping Government Serve the People, enabling citizens around the globe to successfully engage with their governments at all levels and across a variety of health and human services programs. Maximus delivers innovative business process management and technology solutions that contribute to improved outcomes for citizens and higher levels of productivity, accuracy, accountability and efficiency of government-sponsored programs. With more than 30,000 employees worldwide, Maximus is a proud partner to government agencies in the United States, Australia, Canada, Saudi Arabia, Singapore and the United Kingdom. For more information, visit https://www.maximus.com.
EEO Statement
EEO Statement: Active military service members, their spouses, and veteran candidates often embody the core competencies Maximus deems essential, and bring a resiliency and dependability that greatly enhances our workforce. We recognize your unique skills and experiences, and want to provide you with a career path that allows you to continue making a difference for our country. We’re proud of our connections to organizations dedicated to serving veterans and their families. If you are transitioning from military to civilian life, have prior service, are a retired veteran or a member of the National Guard or Reserves, or a spouse of an active military service member, we have challenging and rewarding career opportunities available for you. A committed and diverse workforce is our most important resource. Maximus is an Affirmative Action/Equal Opportunity Employer. Maximus provides equal employment opportunities to all qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status or disabled status.
Pay Transparency
Maximus compensation is based on various factors including but not limited to a candidate’s education, training, experience, expected quality and quantity of work, required travel (if any), external market and internal value analysis including seniority and merit systems, as well as internal pay alignment. Annual salary is just one component of Maximus’s total compensation package. Other rewards may include short- and long-term incentives as well as program-specific awards. Additionally, Maximus provides a variety of benefits to employees, including health insurance coverage, life and disability insurance, a retirement savings plan, paid holidays and paid time off. Compensation shall be commensurate with job duties and relevant work experience. An applicant’s salary history will not be used in determining compensation.
What we are looking for –
A passionate, hungry, and motivated individual who is eager for a chance to join a young startup experiencing rapid growth. At OmniData, we are searching for a remote Senior Azure Data Engineer who has experience working on data warehousing and analytics projects, a strong technical aptitude, and the ability to provide direction for clients on the best strategies for their analytics goals. In return, we offer deep mentorship, a great work/life balance, and the opportunity to be part of creating a consulting firm that makes a difference for our clients!
What you will do –
You will work on various Big Data, Data Warehouse and Analytics projects for our world class customers. In addressing complex client needs, you will be integrated into appropriately sized and skilled teams. This will give you the opportunity to analyze requirements, develop data and analytical solutions, and execute as part of the project team, all while working with the latest tools, such as Azure Synapse Analytics and related Microsoft technologies.
Your Duties and Responsibilities –
- Contribute collaboratively to team meetings using your experience base to further the cause of innovating for OmniData clients.
- Instill confidence in the client as well as your teammates
- Work independently toward client success, while knowing your own limitations and when to call on others for help.
What you must have to be considered –
- 2-3+ years of experience in Analytics and Data Warehousing on the Microsoft platform
- 2-3+ years working with Microsoft SQL Server
- Experience working with the Microsoft Azure stack (e.g. Synapse, Databricks, Data Factory, etc.)
What would be nice for you to have –
- Experience with Python
- Experience gathering requirements and working within various project delivery methodologies
- Experience working as a customer-facing consultant
- Exposure to DAX
- Strong communication skills tying together technologies and architectures to business results
- Some travel may be required (up to 20%) post-COVID-19
Benefits and Perks –
- Competitive salary and benefits commensurate with experience
- Mentorship from highly regarded industry specialists
- Exposure to the latest and greatest Microsoft technologies
- High growth potential for those with an entrepreneurial spirit.
About OmniData –
OmniData is a Portland based Data and Analytics consulting firm leveraging the Microsoft technology stack to help organizations build their Modern Data Estates, designed to serve their digital innovation needs for many years to come. To do this, we apply deep experience in Solution Architecture, Data, Analytics, and Technology to simplify the complex.
OmniData is offering you the opportunity to work with the entire lifecycle of large Data Projects, focused on next generation data warehousing, with surface points to Analytics, Machine Learning and AI. We offer a collaborative work culture that enables you to produce client results with a safety net from your team. You will get to work closely with very experienced consultants who will be able to provide mentorship and career guidance. At the same time, you will be rewarded for learning fast and executing within our teams to provide solutions for OmniData clients.
OmniData Is An Equal Opportunity Employer And All Qualified Applicants Will Receive Consideration For Employment Without Regard To Race, Color, Religion, Sex, National Origin, Disability Status, Protected Veteran Status, Or Any Other Characteristic Protected By Law.

Ness Digital Engineering provides strategic IT consulting to global enterprises. Our DevOps and Infrastructure practice provides solutions, methodologies, and strategic guidance for digital transformation, containerization, and automation. Our Financial Services team offers strong domain expertise and technology acumen to deliver feature-focused solutions in Capital Markets.
We solve complex business problems with technology and insight. Our business domain knowledge, technology expertise, and Agile delivery process have delivered seamless Digital Transformations at some of the largest customers globally. We’re an AWS Premier Consulting Partner, a Premier Confluent Systems Integrator and a Snowflake Select Services Partner.
As a Data Engineer you will:
- Serve as an expert technologist in implementing ETL data pipelines, streaming data solutions, data lakes and data warehouses
- Work on the architecture, design, implementation, and testing of advanced data solutions for Ness clients
- Exhibit expertise in data modeling, data warehouses, data lakes, and building ETL data pipelines
- Modernize our clients’ data platforms, transitioning to cloud-hosted solutions using AWS, Azure, GCP, and Snowflake
Requirements
- Strong relational database skills, including SQL and data modeling
- Experience using ETL / ELT tools such as EMR, Fivetran, Informatica
- Experience with data collection, data cleansing, and ETL processes
- UNIX/Linux skills including shell scripts and basic system administration
- Programming skills using modern programming languages like Python, Java, or JavaScript
- Cloud experience strongly preferred using AWS (Redshift, RDS, EMR, Glue) or Azure (Synapse Analytics, Azure SQL)
- Experience with data visualization tools such as PowerBI or Tableau
- Excellent verbal and written communication skills
- Ability to manage multiple projects simultaneously
Additional Desired Skills:
- Programming languages including Java, Scala, C++, C#, JavaScript, R
- Experience with software test automation
- Data warehouse technologies such as Snowflake, AWS Redshift, and/or Azure Synapse Analytics
- Experience with AWS ML technologies such as SageMaker
- Experience with big data analytics tools such as Spark or DataBricks
- Experience with streaming data analytics, Kafka, and/or Kinesis Streams
Education and Certification Requirements:
- An undergraduate degree is usually required, preferably in a STEM discipline.
- AWS or Azure certifications desirable (Solution Architect, Machine Learning, or Big Data specialty certification)
- Snowflake SnowPro certification desirable
Benefits
- Flexible work environment with a globally distributed team
- Competitive compensation packages including performance bonuses
- Paid vacation and sick time off
- Employer-subsidized medical, dental, and vision insurance
- Company-paid short- and long-term disability insurance
- A culture of cooperation and support
- Continual professional and personal development through employer-paid training and certifications

About the project
Spate is the machine intelligence platform used by top industry beauty brands. We analyze over 20 billion search signals to spot the next big beauty trend and help brands with their marketing/product development strategies.
As we expand to new Asian markets (Japan), we are looking to hire a Data Operations Manager to help build and manage our expanding datasets. In this role, you will be in charge of overseeing the Spate data expansion and entry process. This role requires a keen eye for detail and a strong passion for organization and project management. We would be looking at 10 to 20 hours a week (we can be flexible depending on your availability).
Responsibilities
- Oversee data process and quality assurance for each vertical/market
- Manage relationships with the data entry team
- Analyze datasets and investigate discrepancies or inconsistencies
- Curate interesting and unique trends for Spate content; brainstorm compelling topic ideas for upcoming reports
Requirements
Minimum qualifications
- 1-3 years of experience
- Exceptional verbal and written skills
- Meticulous and organized, with a high level of attention to detail
- Proven problem-solving skills using deductive reasoning, understanding hierarchical relationships, and identifying gaps in logic
- Demonstrated project management skills and ability to manage multiple priorities
- Self-starter and ability to work independently
Preferred qualifications
- Experience in SEA/SEM and SEO
- Or experience in CRM
- Or experience in copywriting
Benefits
About Spate
At Spate, we use data science to predict the next big consumer trend in beauty, personal care & food.
Spate was founded in 2018 by Yarden Horwitz & Olivier Zimmer, two ex-Googlers who led the trendspotting division at Google and uncovered trends such as turmeric, cold brew, and face masks. Spate has been funded by the prestigious Y Combinator incubator and Initialized Capital. We currently have ~90 clients in the U.S., mainly in the beauty space, from direct-to-consumer brands to big names such as L’Oréal, Estée Lauder, Unilever...
As two ex-Googlers with a passion for using data to spot new patterns in consumer behavior, we have made it our mission to build the world’s greatest consumer trends prediction platform of all time. And not just because we want to be trendy, but because we want to help brands get better at giving consumers what they really want.
Brands waste over $200BN every year due to product launch failures and inventory waste. By spotting turmeric, we were able to tell brands to stop wasting money on kale products and provide consumers with glorious golden milk lattes instead - because that’s what consumers want.
How do we do this? We tap into publicly available consumer data (anonymous and aggregated) to identify interesting shifts in consumer behavior. We leverage the latest available technology in ML to solve problems in ways that have never been explored before.
Why Spate?
- Join a well-funded company that is working with the top brands in consumer goods
- Work directly with the founders to set the direction of the company
- Grow in a fast-paced environment
- Always be up-to-date on the latest trends!
We enjoy a casual atmosphere, but our culture is about getting things done. We are passionate yet pragmatic when it comes to solving problems in a fast-paced environment. Our standards are high, but we thrive on working with people we respect and can learn from. We’re flexible on work styles, as long as everyone is getting their work done - and getting it done well.
We are an equal opportunity employer where diversity and inclusion are central pillars of our company strategy. We look for applicants who understand, embrace, and thrive in a multicultural and increasingly globalized world. We do not discriminate based on race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

Here at Hugging Face, we’re on a journey to advance good Machine Learning and make it more accessible. Along the way, we contribute to the development of technology for the better.
We have built the fastest-growing open-source library of pre-trained models in the world. With over 100M+ installs and 65K+ stars on GitHub, more than 10,000 companies use HF technology in production, including leading AI organizations such as Google, Elastic, Salesforce, Algolia, and Grammarly.
About the Role
As a data engineer for vision datasets, you will work on a 3-6 month project to catalyze progress in computer vision for the open-source and research community.
The project will deal with:
- analyzing publicly available vision datasets,
- providing better access to selected datasets within the 🤗 Datasets library,
- improving vision data pre- and post-processing features within the 🤗 Datasets library,
- evaluating state-of-the-art computer vision systems on a variety of vision/image datasets.
During your project, you will closely work with the vision community. The goal is to catalyze research in computer vision by making image preprocessing as easy as possible for as many datasets as possible, as well as providing reproducible baselines for state-of-the-art computer vision systems and empowering the vision community to improve current dataset documentation practices.
About you
You'll love this internship if you are passionate about current trends in computer vision and view sharing your work with the research community as a necessity.
You should be well-versed in Python, have some experience in image preprocessing, and not be (too) afraid to process multiple terabytes of image data on a daily basis. Experience with some tabular data libraries, e.g. Apache Arrow, as well as open-source contributions and the ability to communicate feature requests to a diverse open-source community are a plus! It is advantageous if you are comfortable working remotely, as most of our collaborations are conducted in a remote setting.
We encourage students enrolled in university (Ph.D., Master's, or Bachelor's), data scientists, and ML/Data engineers looking for new opportunities to apply for this internship.
More about Hugging Face
We are actively working to build a culture that values diversity, equity, and inclusivity. We are intentionally building a workplace where you feel respected and supported—regardless of who you are or where you come from. We believe this is foundational to building a great company and community, as well as the future of machine learning more broadly. Hugging Face is an equal opportunity employer, and we do not discriminate based on race, ethnicity, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or ability status.
We value development. You will work with some of the smartest people in our industry. We are an organization that has a bias for impact and is always challenging ourselves to grow continuously. We provide all employees with reimbursement for relevant conferences, training, and education.
We care about your well-being. We offer flexible working hours and remote options. We offer health, dental, and vision benefits for employees and their dependents. We also offer parental leave and unlimited paid time off.
We support our employees wherever they are. While we have office spaces in NYC and Paris, we're very distributed, and all remote employees have the opportunity to visit our offices. If needed, we'll also outfit your workstation to ensure you succeed.
We want our teammates to be shareholders. All employees have company equity as part of their compensation package. If we succeed in becoming a category-defining platform in machine learning and artificial intelligence, everyone enjoys the upside.

< class="h2">👋 A little about us

GitBook is a modern documentation platform. Our ambition is to empower teams through a new document standard suited for modern work and collaboration.
GitBook is now used by over 2M users and thousands of teams such as DeliveryHero, Netflix, Decathlon, and Celonis. With close to 25,000 sign-ups per month and 10% revenue growth month-over-month, we're looking to expand our Data team to actively contribute to our success.
< class="h3">Who uses GitBook?
Some cool data applications use us to power their public documentation: Census, Castor, OpenMetadata, Snyk, and many more! We also have great organisations using us to build their internal knowledge base: Netflix, Decathlon, and Adobe.
< class="h1">🤔 Why are we opening this position?
You will be joining Rémi, our Head of Data, to spread insights across the company. As a product-led company with high growth, we strongly believe that data should be involved in any decision process: from product to sales.
On team philosophy, we want people who are interested in working on any part of the data journey: data engineering, data analytics, and data science. That doesn't mean being an expert in all three fields, but we believe any data person should aim to understand and cover most of the scope.
< class="h1">🙌 What will you be doing?
As a Data Analyst, you will:
- collaborate with stakeholders (Go-To-Market and Product) to define KPIs and success metrics that maximize company- and team-level performance
- discover and explain trends across data sources, potential opportunities for growth or improvement
- contribute heavily to our data warehouse using dbt and BigQuery
- perform advanced product analysis using Amplitude
- design and develop comprehensive dashboards to provide self-serve analytics solutions to different stakeholders across the business
- create trainings and documentation to ensure adoption of our data solutions and recommendations
- shape the dynamics and processes of the Data team, including our onboarding, hiring, and team goals
- develop a full understanding of the data stack
< class="h2">🛠 What tooling environment will you be working on?
Data stack:
dbt, BigQuery, Tableau, Amplitude, Segment, Airflow, Cloud Functions, Zapier, Python, Stitch, GitHub
Main 3rd-party systems:
Hubspot, Stripe, Firestore
< class="h1">🫶 You will be valued for:
- your curiosity and passion for the data field that pushes you to work across all data pillars
- your ability to get insights from product analytics data
- your ability to jump into any GitBook member's shoes to understand their needs
- your ability to build and improve our data warehouse models using dbt
- your determination to make data central to GitBook's success (the evangelist part)
- your ability to successfully collaborate on cross-team work in a remote and async environment
- bonus: your knowledge of the B2B SaaS industry, allowing you to quickly understand our challenges
< class="h2">🌍 Location
Remote
< class="h1">✨ What's next?
First, we will take the time to review your application and we will get back to you within a week, regardless of our decision. We know that your time is valuable, so we work to move the process along quickly and keep it casual. We're not believers in "gotcha" questions or checking for skills you'll never actually use at GitBook. Here's what our process will look like:
- Meeting with Rémi, our Head of Data (60 min) to look for healthy alignment
- Technical exercise (60-90 min) to deep dive on your technical skills
- Meet the Founder call (45 min) with Samy, co-founder
- Meet the team (60 min) with a team member to confirm that you will thrive in our culture and answer any question about what it's like working here
< class="h1" style="border-width: 0px; border-style: solid; border-color: rgb(238 239 242/var(--tw-border-opacity)); border-image: initial; box-sizing: border-box; --tw-translate-x: 0; --tw-translate-y: 0; --tw-rotate: 0; --tw-skew-x: 0; --tw-skew-y: 0; --tw-scale-x: 1; --tw-scale-y: 1; --tw-transform: translateX(var(--tw-translate-x)) translateY(var(--tw-translate-y)) rotate(var(--tw-rotate)) skewX(var(--tw-skew-x)) skewY(var(--tw-skew-y)) scaleX(var(--tw-scale-x)) scaleY(var(--tw-scale-y)); --tw-scroll-snap-strictness: proximity; --tw-border-opacity: 1; --tw-ring-inset: var(--tw-empty, ); --tw-ring-offset-width: 0px; --tw-ring-offset-color: #fff; --tw-ring-color: rgba(59,130,246,0.5); --tw-ring-offset-shadow: 0 0 #0000; --tw-ring-shadow: 0 0 #0000; --tw-shadow: 0 0 #0000; --tw-shadow-colored: 0 0 #0000; --tw-blur: var(--tw-empty, ); --tw-brightness: var(--tw-empty, ); --tw-contrast: var(--tw-empty, ); --tw-grayscale: var(--tw-empty, ); --tw-hue-rotate: var(--tw-empty, ); --tw-invert: var(--tw-empty, ); --tw-saturate: var(--tw-empty, ); --tw-sepia: var(--tw-empty, ); --tw-drop-shadow: var(--tw-empty, ); --tw-filter: var(--tw-blur) var(--tw-brightness) var(--tw-contrast) var(--tw-grayscale) var(--tw-hue-rotate) var(--tw-invert) var(--tw-saturate) var(--tw-sepia) var(--tw-drop-shadow); --tw-backdrop-blur: var(--tw-empty, ); --tw-backdrop-brightness: var(--tw-empty, ); --tw-backdrop-contrast: var(--tw-empty, ); --tw-backdrop-grayscale: var(--tw-empty, ); --tw-backdrop-hue-rotate: var(--tw-empty, ); --tw-backdrop-invert: var(--tw-empty, ); --tw-backdrop-opacity: var(--tw-empty, ); --tw-backdrop-saturate: var(--tw-empty, ); --tw-backdrop-sepia: var(--tw-empty, ); --tw-backdrop-filter: var(--tw-backdrop-blur) var(--tw-backdrop-brightness) var(--tw-backdrop-contrast) var(--tw-backdrop-grayscale) var(--tw-backdrop-hue-rotate) var(--tw-backdrop-invert) var(--tw-backdrop-opacity) var(--tw-backdrop-saturate) var(--tw-backdrop-sepia); font-weight: 
var(--company-header-font-weight,700); margin: 0px 0px 8px; line-height: 1.2; color: #252525; background-color: #ffffff;">👥 Join GitBook
You will be joining during a pivotal moment for GitBook. We've enjoyed great success since we were founded, and now we're taking conscious steps to take our company to the next level. That means you will have the opportunity to build, positively impact the trajectory of the company, and enjoy the benefits of helping grow our company 20x. Every single team member adds value to our culture, so it's important for us to state our company values:
- 🚀 Ambition (Aim higher)
- ✊ Ownership (Take control and Own it)
- 📈 Accountability (Be accountable to results)
- 🙋♀️ Care (Give a sh*t)
- 🏃♂️ Train (Grow yourself)
- 🤝 Genuine (Say it and accept it)
- 👯♂️ Team player (Leverage the team)
- 🏗 Architect (Plan & Build)

< class="h1" style="border-width: 0px; border-style: solid; border-color: rgb(238 239 242/var(--tw-border-opacity)); border-image: initial; box-sizing: border-box; --tw-translate-x: 0; --tw-translate-y: 0; --tw-rotate: 0; --tw-skew-x: 0; --tw-skew-y: 0; --tw-scale-x: 1; --tw-scale-y: 1; --tw-transform: translateX(var(--tw-translate-x)) translateY(var(--tw-translate-y)) rotate(var(--tw-rotate)) skewX(var(--tw-skew-x)) skewY(var(--tw-skew-y)) scaleX(var(--tw-scale-x)) scaleY(var(--tw-scale-y)); --tw-scroll-snap-strictness: proximity; --tw-border-opacity: 1; --tw-ring-inset: var(--tw-empty, ); --tw-ring-offset-width: 0px; --tw-ring-offset-color: #fff; --tw-ring-color: rgba(59,130,246,0.5); --tw-ring-offset-shadow: 0 0 #0000; --tw-ring-shadow: 0 0 #0000; --tw-shadow: 0 0 #0000; --tw-shadow-colored: 0 0 #0000; --tw-blur: var(--tw-empty, ); --tw-brightness: var(--tw-empty, ); --tw-contrast: var(--tw-empty, ); --tw-grayscale: var(--tw-empty, ); --tw-hue-rotate: var(--tw-empty, ); --tw-invert: var(--tw-empty, ); --tw-saturate: var(--tw-empty, ); --tw-sepia: var(--tw-empty, ); --tw-drop-shadow: var(--tw-empty, ); --tw-filter: var(--tw-blur) var(--tw-brightness) var(--tw-contrast) var(--tw-grayscale) var(--tw-hue-rotate) var(--tw-invert) var(--tw-saturate) var(--tw-sepia) var(--tw-drop-shadow); --tw-backdrop-blur: var(--tw-empty, ); --tw-backdrop-brightness: var(--tw-empty, ); --tw-backdrop-contrast: var(--tw-empty, ); --tw-backdrop-grayscale: var(--tw-empty, ); --tw-backdrop-hue-rotate: var(--tw-empty, ); --tw-backdrop-invert: var(--tw-empty, ); --tw-backdrop-opacity: var(--tw-empty, ); --tw-backdrop-saturate: var(--tw-empty, ); --tw-backdrop-sepia: var(--tw-empty, ); --tw-backdrop-filter: var(--tw-backdrop-blur) var(--tw-backdrop-brightness) var(--tw-backdrop-contrast) var(--tw-backdrop-grayscale) var(--tw-backdrop-hue-rotate) var(--tw-backdrop-invert) var(--tw-backdrop-opacity) var(--tw-backdrop-saturate) var(--tw-backdrop-sepia); font-weight: 
var(--company-header-font-weight,700); margin: 0px 0px 8px; line-height: 1.2; color: #252525; background-color: #ffffff;">👋 A little about us

GitBook is a modern documentation platform. Our ambition is to empower teams through a new document standard suited for modern work and collaboration. GitBook is now used by over 2M users and thousands of teams such as DeliveryHero, Netflix, Decathlon, or Celonis. With close to 25,000 sign-ups per month and 10% revenue growth month-over-month, we're looking to expand our Data team to actively contribute to our success.
< class="h3" style="border-width: 0px; border-style: solid; border-color: rgb(238 239 242/var(--tw-border-opacity)); border-image: initial; box-sizing: border-box; --tw-translate-x: 0; --tw-translate-y: 0; --tw-rotate: 0; --tw-skew-x: 0; --tw-skew-y: 0; --tw-scale-x: 1; --tw-scale-y: 1; --tw-transform: translateX(var(--tw-translate-x)) translateY(var(--tw-translate-y)) rotate(var(--tw-rotate)) skewX(var(--tw-skew-x)) skewY(var(--tw-skew-y)) scaleX(var(--tw-scale-x)) scaleY(var(--tw-scale-y)); --tw-scroll-snap-strictness: proximity; --tw-border-opacity: 1; --tw-ring-inset: var(--tw-empty, ); --tw-ring-offset-width: 0px; --tw-ring-offset-color: #fff; --tw-ring-color: rgba(59,130,246,0.5); --tw-ring-offset-shadow: 0 0 #0000; --tw-ring-shadow: 0 0 #0000; --tw-shadow: 0 0 #0000; --tw-shadow-colored: 0 0 #0000; --tw-blur: var(--tw-empty, ); --tw-brightness: var(--tw-empty, ); --tw-contrast: var(--tw-empty, ); --tw-grayscale: var(--tw-empty, ); --tw-hue-rotate: var(--tw-empty, ); --tw-invert: var(--tw-empty, ); --tw-saturate: var(--tw-empty, ); --tw-sepia: var(--tw-empty, ); --tw-drop-shadow: var(--tw-empty, ); --tw-filter: var(--tw-blur) var(--tw-brightness) var(--tw-contrast) var(--tw-grayscale) var(--tw-hue-rotate) var(--tw-invert) var(--tw-saturate) var(--tw-sepia) var(--tw-drop-shadow); --tw-backdrop-blur: var(--tw-empty, ); --tw-backdrop-brightness: var(--tw-empty, ); --tw-backdrop-contrast: var(--tw-empty, ); --tw-backdrop-grayscale: var(--tw-empty, ); --tw-backdrop-hue-rotate: var(--tw-empty, ); --tw-backdrop-invert: var(--tw-empty, ); --tw-backdrop-opacity: var(--tw-empty, ); --tw-backdrop-saturate: var(--tw-empty, ); --tw-backdrop-sepia: var(--tw-empty, ); --tw-backdrop-filter: var(--tw-backdrop-blur) var(--tw-backdrop-brightness) var(--tw-backdrop-contrast) var(--tw-backdrop-grayscale) var(--tw-backdrop-hue-rotate) var(--tw-backdrop-invert) var(--tw-backdrop-opacity) var(--tw-backdrop-saturate) var(--tw-backdrop-sepia); font-weight: 
var(--company-header-font-weight,700); margin: 40px 0px 8px; line-height: 1.2; color: #252525; background-color: #ffffff;">Who uses GitBook?Some cool data applications use us to power their public documentation: Census, Castor, OpenMetadata and many more! (Snyk) We also have great organisations using us to build their internal knowledge base: Netflix, Decathlon or Adobe.
< class="h2" style="border-width: 0px; border-style: solid; border-color: rgb(238 239 242/var(--tw-border-opacity)); border-image: initial; box-sizing: border-box; --tw-translate-x: 0; --tw-translate-y: 0; --tw-rotate: 0; --tw-skew-x: 0; --tw-skew-y: 0; --tw-scale-x: 1; --tw-scale-y: 1; --tw-transform: translateX(var(--tw-translate-x)) translateY(var(--tw-translate-y)) rotate(var(--tw-rotate)) skewX(var(--tw-skew-x)) skewY(var(--tw-skew-y)) scaleX(var(--tw-scale-x)) scaleY(var(--tw-scale-y)); --tw-scroll-snap-strictness: proximity; --tw-border-opacity: 1; --tw-ring-inset: var(--tw-empty, ); --tw-ring-offset-width: 0px; --tw-ring-offset-color: #fff; --tw-ring-color: rgba(59,130,246,0.5); --tw-ring-offset-shadow: 0 0 #0000; --tw-ring-shadow: 0 0 #0000; --tw-shadow: 0 0 #0000; --tw-shadow-colored: 0 0 #0000; --tw-blur: var(--tw-empty, ); --tw-brightness: var(--tw-empty, ); --tw-contrast: var(--tw-empty, ); --tw-grayscale: var(--tw-empty, ); --tw-hue-rotate: var(--tw-empty, ); --tw-invert: var(--tw-empty, ); --tw-saturate: var(--tw-empty, ); --tw-sepia: var(--tw-empty, ); --tw-drop-shadow: var(--tw-empty, ); --tw-filter: var(--tw-blur) var(--tw-brightness) var(--tw-contrast) var(--tw-grayscale) var(--tw-hue-rotate) var(--tw-invert) var(--tw-saturate) var(--tw-sepia) var(--tw-drop-shadow); --tw-backdrop-blur: var(--tw-empty, ); --tw-backdrop-brightness: var(--tw-empty, ); --tw-backdrop-contrast: var(--tw-empty, ); --tw-backdrop-grayscale: var(--tw-empty, ); --tw-backdrop-hue-rotate: var(--tw-empty, ); --tw-backdrop-invert: var(--tw-empty, ); --tw-backdrop-opacity: var(--tw-empty, ); --tw-backdrop-saturate: var(--tw-empty, ); --tw-backdrop-sepia: var(--tw-empty, ); --tw-backdrop-filter: var(--tw-backdrop-blur) var(--tw-backdrop-brightness) var(--tw-backdrop-contrast) var(--tw-backdrop-grayscale) var(--tw-backdrop-hue-rotate) var(--tw-backdrop-invert) var(--tw-backdrop-opacity) var(--tw-backdrop-saturate) var(--tw-backdrop-sepia); font-weight: 
var(--company-header-font-weight,700); margin: 56px 0px 8px; line-height: 1.2; color: #252525; background-color: #ffffff;">🤔 Why are we opening this position ?
You will be joining Rémi, our Head of Data, to spread insights across the company. As a product-led company with high growth, we strongly believe that data should be involved in every decision process: from product to sales.
On team philosophy, we want people who are interested in working on any part of the data journey: data engineering, data analytics, and data science. That doesn't mean being an expert in all three fields, but we believe any data person should aim to understand and cover most of the scope.
< class="h1" style="border-width: 0px; border-style: solid; border-color: rgb(238 239 242/var(--tw-border-opacity)); border-image: initial; box-sizing: border-box; --tw-translate-x: 0; --tw-translate-y: 0; --tw-rotate: 0; --tw-skew-x: 0; --tw-skew-y: 0; --tw-scale-x: 1; --tw-scale-y: 1; --tw-transform: translateX(var(--tw-translate-x)) translateY(var(--tw-translate-y)) rotate(var(--tw-rotate)) skewX(var(--tw-skew-x)) skewY(var(--tw-skew-y)) scaleX(var(--tw-scale-x)) scaleY(var(--tw-scale-y)); --tw-scroll-snap-strictness: proximity; --tw-border-opacity: 1; --tw-ring-inset: var(--tw-empty, ); --tw-ring-offset-width: 0px; --tw-ring-offset-color: #fff; --tw-ring-color: rgba(59,130,246,0.5); --tw-ring-offset-shadow: 0 0 #0000; --tw-ring-shadow: 0 0 #0000; --tw-shadow: 0 0 #0000; --tw-shadow-colored: 0 0 #0000; --tw-blur: var(--tw-empty, ); --tw-brightness: var(--tw-empty, ); --tw-contrast: var(--tw-empty, ); --tw-grayscale: var(--tw-empty, ); --tw-hue-rotate: var(--tw-empty, ); --tw-invert: var(--tw-empty, ); --tw-saturate: var(--tw-empty, ); --tw-sepia: var(--tw-empty, ); --tw-drop-shadow: var(--tw-empty, ); --tw-filter: var(--tw-blur) var(--tw-brightness) var(--tw-contrast) var(--tw-grayscale) var(--tw-hue-rotate) var(--tw-invert) var(--tw-saturate) var(--tw-sepia) var(--tw-drop-shadow); --tw-backdrop-blur: var(--tw-empty, ); --tw-backdrop-brightness: var(--tw-empty, ); --tw-backdrop-contrast: var(--tw-empty, ); --tw-backdrop-grayscale: var(--tw-empty, ); --tw-backdrop-hue-rotate: var(--tw-empty, ); --tw-backdrop-invert: var(--tw-empty, ); --tw-backdrop-opacity: var(--tw-empty, ); --tw-backdrop-saturate: var(--tw-empty, ); --tw-backdrop-sepia: var(--tw-empty, ); --tw-backdrop-filter: var(--tw-backdrop-blur) var(--tw-backdrop-brightness) var(--tw-backdrop-contrast) var(--tw-backdrop-grayscale) var(--tw-backdrop-hue-rotate) var(--tw-backdrop-invert) var(--tw-backdrop-opacity) var(--tw-backdrop-saturate) var(--tw-backdrop-sepia); font-weight: 
var(--company-header-font-weight,700); margin: 0px 0px 8px; line-height: 1.2; color: #252525; background-color: #ffffff;">🙌 What will you be doing?
As a Data Engineer, you will be involved in:
- Owning our data stack to maintain and improve it
- Bringing best practices from software engineering to empower the data team
- Being a top contributor to our growing data warehouse, built with dbt and BigQuery
- Focusing on solving business questions more than setting up infrastructure
- Empowering everyone in the company by making data available to them
< class="h2" style="border-width: 0px; border-style: solid; border-color: rgb(238 239 242/var(--tw-border-opacity)); border-image: initial; box-sizing: border-box; --tw-translate-x: 0; --tw-translate-y: 0; --tw-rotate: 0; --tw-skew-x: 0; --tw-skew-y: 0; --tw-scale-x: 1; --tw-scale-y: 1; --tw-transform: translateX(var(--tw-translate-x)) translateY(var(--tw-translate-y)) rotate(var(--tw-rotate)) skewX(var(--tw-skew-x)) skewY(var(--tw-skew-y)) scaleX(var(--tw-scale-x)) scaleY(var(--tw-scale-y)); --tw-scroll-snap-strictness: proximity; --tw-border-opacity: 1; --tw-ring-inset: var(--tw-empty, ); --tw-ring-offset-width: 0px; --tw-ring-offset-color: #fff; --tw-ring-color: rgba(59,130,246,0.5); --tw-ring-offset-shadow: 0 0 #0000; --tw-ring-shadow: 0 0 #0000; --tw-shadow: 0 0 #0000; --tw-shadow-colored: 0 0 #0000; --tw-blur: var(--tw-empty, ); --tw-brightness: var(--tw-empty, ); --tw-contrast: var(--tw-empty, ); --tw-grayscale: var(--tw-empty, ); --tw-hue-rotate: var(--tw-empty, ); --tw-invert: var(--tw-empty, ); --tw-saturate: var(--tw-empty, ); --tw-sepia: var(--tw-empty, ); --tw-drop-shadow: var(--tw-empty, ); --tw-filter: var(--tw-blur) var(--tw-brightness) var(--tw-contrast) var(--tw-grayscale) var(--tw-hue-rotate) var(--tw-invert) var(--tw-saturate) var(--tw-sepia) var(--tw-drop-shadow); --tw-backdrop-blur: var(--tw-empty, ); --tw-backdrop-brightness: var(--tw-empty, ); --tw-backdrop-contrast: var(--tw-empty, ); --tw-backdrop-grayscale: var(--tw-empty, ); --tw-backdrop-hue-rotate: var(--tw-empty, ); --tw-backdrop-invert: var(--tw-empty, ); --tw-backdrop-opacity: var(--tw-empty, ); --tw-backdrop-saturate: var(--tw-empty, ); --tw-backdrop-sepia: var(--tw-empty, ); --tw-backdrop-filter: var(--tw-backdrop-blur) var(--tw-backdrop-brightness) var(--tw-backdrop-contrast) var(--tw-backdrop-grayscale) var(--tw-backdrop-hue-rotate) var(--tw-backdrop-invert) var(--tw-backdrop-opacity) var(--tw-backdrop-saturate) var(--tw-backdrop-sepia); font-weight: 
var(--company-header-font-weight,700); margin: 56px 0px 8px; line-height: 1.2; color: #252525; background-color: #ffffff;">🛠 What tooling environment will you be working on ?
Data stack:
dbt, BigQuery, Tableau, Amplitude, Segment, Airflow, Cloud Functions, Zapier, Python, Stitch, GitHub
Main third-party systems:
HubSpot, Stripe, Firestore
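The stack above centers on dbt models over BigQuery fed by Segment and Stitch. As a rough illustration of the kind of staging logic involved, here is a minimal pure-Python sketch of an event-normalization step (field names like `message_id` and `received_at` are illustrative, loosely mirroring Segment's schema; in practice this would live in a dbt SQL model):

```python
from datetime import datetime, timezone

def normalize_events(raw_events):
    """Dedupe raw analytics events by message_id and normalize timestamps to UTC.

    A warehouse staging step of the kind a dbt model expresses in SQL,
    sketched in plain Python for illustration.
    """
    seen = set()
    rows = []
    for event in raw_events:
        message_id = event.get("message_id")
        if message_id is None or message_id in seen:
            continue  # drop duplicates and malformed events
        seen.add(message_id)
        rows.append({
            "message_id": message_id,
            "user_id": event.get("user_id"),
            "event_name": event.get("event"),
            # parse the ISO 8601 timestamp and convert to UTC
            "received_at": datetime.fromisoformat(event["received_at"]).astimezone(timezone.utc),
        })
    return rows
```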
< class="h1" style="border-width: 0px; border-style: solid; border-color: rgb(238 239 242/var(--tw-border-opacity)); border-image: initial; box-sizing: border-box; --tw-translate-x: 0; --tw-translate-y: 0; --tw-rotate: 0; --tw-skew-x: 0; --tw-skew-y: 0; --tw-scale-x: 1; --tw-scale-y: 1; --tw-transform: translateX(var(--tw-translate-x)) translateY(var(--tw-translate-y)) rotate(var(--tw-rotate)) skewX(var(--tw-skew-x)) skewY(var(--tw-skew-y)) scaleX(var(--tw-scale-x)) scaleY(var(--tw-scale-y)); --tw-scroll-snap-strictness: proximity; --tw-border-opacity: 1; --tw-ring-inset: var(--tw-empty, ); --tw-ring-offset-width: 0px; --tw-ring-offset-color: #fff; --tw-ring-color: rgba(59,130,246,0.5); --tw-ring-offset-shadow: 0 0 #0000; --tw-ring-shadow: 0 0 #0000; --tw-shadow: 0 0 #0000; --tw-shadow-colored: 0 0 #0000; --tw-blur: var(--tw-empty, ); --tw-brightness: var(--tw-empty, ); --tw-contrast: var(--tw-empty, ); --tw-grayscale: var(--tw-empty, ); --tw-hue-rotate: var(--tw-empty, ); --tw-invert: var(--tw-empty, ); --tw-saturate: var(--tw-empty, ); --tw-sepia: var(--tw-empty, ); --tw-drop-shadow: var(--tw-empty, ); --tw-filter: var(--tw-blur) var(--tw-brightness) var(--tw-contrast) var(--tw-grayscale) var(--tw-hue-rotate) var(--tw-invert) var(--tw-saturate) var(--tw-sepia) var(--tw-drop-shadow); --tw-backdrop-blur: var(--tw-empty, ); --tw-backdrop-brightness: var(--tw-empty, ); --tw-backdrop-contrast: var(--tw-empty, ); --tw-backdrop-grayscale: var(--tw-empty, ); --tw-backdrop-hue-rotate: var(--tw-empty, ); --tw-backdrop-invert: var(--tw-empty, ); --tw-backdrop-opacity: var(--tw-empty, ); --tw-backdrop-saturate: var(--tw-empty, ); --tw-backdrop-sepia: var(--tw-empty, ); --tw-backdrop-filter: var(--tw-backdrop-blur) var(--tw-backdrop-brightness) var(--tw-backdrop-contrast) var(--tw-backdrop-grayscale) var(--tw-backdrop-hue-rotate) var(--tw-backdrop-invert) var(--tw-backdrop-opacity) var(--tw-backdrop-saturate) var(--tw-backdrop-sepia); font-weight: 
var(--company-header-font-weight,700); margin: 0px 0px 8px; line-height: 1.2; color: #252525; background-color: #ffffff;">🫶You will be valued for:
- your ability to build and maintain any data layer, drawing on your past experience in the data engineering field
- your curiosity and passion for the data field, which pushes you to work across all data pillars
- your ability to build and improve our data warehouse models using dbt
- your determination to make data the heart of GitBook's success (the evangelist part)
- your ability to successfully collaborate on cross-team work in a remote and async environment
- bonus: your knowledge of the B2B SaaS industry, allowing you to quickly understand our challenges
< class="h1" style="border-width: 0px; border-style: solid; border-color: rgb(238 239 242/var(--tw-border-opacity)); border-image: initial; box-sizing: border-box; --tw-translate-x: 0; --tw-translate-y: 0; --tw-rotate: 0; --tw-skew-x: 0; --tw-skew-y: 0; --tw-scale-x: 1; --tw-scale-y: 1; --tw-transform: translateX(var(--tw-translate-x)) translateY(var(--tw-translate-y)) rotate(var(--tw-rotate)) skewX(var(--tw-skew-x)) skewY(var(--tw-skew-y)) scaleX(var(--tw-scale-x)) scaleY(var(--tw-scale-y)); --tw-scroll-snap-strictness: proximity; --tw-border-opacity: 1; --tw-ring-inset: var(--tw-empty, ); --tw-ring-offset-width: 0px; --tw-ring-offset-color: #fff; --tw-ring-color: rgba(59,130,246,0.5); --tw-ring-offset-shadow: 0 0 #0000; --tw-ring-shadow: 0 0 #0000; --tw-shadow: 0 0 #0000; --tw-shadow-colored: 0 0 #0000; --tw-blur: var(--tw-empty, ); --tw-brightness: var(--tw-empty, ); --tw-contrast: var(--tw-empty, ); --tw-grayscale: var(--tw-empty, ); --tw-hue-rotate: var(--tw-empty, ); --tw-invert: var(--tw-empty, ); --tw-saturate: var(--tw-empty, ); --tw-sepia: var(--tw-empty, ); --tw-drop-shadow: var(--tw-empty, ); --tw-filter: var(--tw-blur) var(--tw-brightness) var(--tw-contrast) var(--tw-grayscale) var(--tw-hue-rotate) var(--tw-invert) var(--tw-saturate) var(--tw-sepia) var(--tw-drop-shadow); --tw-backdrop-blur: var(--tw-empty, ); --tw-backdrop-brightness: var(--tw-empty, ); --tw-backdrop-contrast: var(--tw-empty, ); --tw-backdrop-grayscale: var(--tw-empty, ); --tw-backdrop-hue-rotate: var(--tw-empty, ); --tw-backdrop-invert: var(--tw-empty, ); --tw-backdrop-opacity: var(--tw-empty, ); --tw-backdrop-saturate: var(--tw-empty, ); --tw-backdrop-sepia: var(--tw-empty, ); --tw-backdrop-filter: var(--tw-backdrop-blur) var(--tw-backdrop-brightness) var(--tw-backdrop-contrast) var(--tw-backdrop-grayscale) var(--tw-backdrop-hue-rotate) var(--tw-backdrop-invert) var(--tw-backdrop-opacity) var(--tw-backdrop-saturate) var(--tw-backdrop-sepia); font-weight: 
var(--company-header-font-weight,700); margin: 0px 0px 8px; line-height: 1.2; color: #252525; background-color: #ffffff;">✨ What's next?
First, we will take the time to review your application and we will get back to you within a week, regardless of our decision. We know that your time is valuable, so we work to move the process along quickly and keep it casual. We're not believers in "gotcha" questions or checking for skills you'll never actually use at GitBook. Here's what our process will look like:
- Meeting with Rémi, our Head of Data (60 min) to look for healthy alignment
- Technical exercise (60-90 min) to deep dive on your technical skills
- Meet the Founder call (45 min) with Samy, co-founder
- Meet the team (60 min) with a team member to confirm that you will thrive in our culture and answer any question about what it's like working here
< class="h1" style="border-width: 0px; border-style: solid; border-color: rgb(238 239 242/var(--tw-border-opacity)); border-image: initial; box-sizing: border-box; --tw-translate-x: 0; --tw-translate-y: 0; --tw-rotate: 0; --tw-skew-x: 0; --tw-skew-y: 0; --tw-scale-x: 1; --tw-scale-y: 1; --tw-transform: translateX(var(--tw-translate-x)) translateY(var(--tw-translate-y)) rotate(var(--tw-rotate)) skewX(var(--tw-skew-x)) skewY(var(--tw-skew-y)) scaleX(var(--tw-scale-x)) scaleY(var(--tw-scale-y)); --tw-scroll-snap-strictness: proximity; --tw-border-opacity: 1; --tw-ring-inset: var(--tw-empty, ); --tw-ring-offset-width: 0px; --tw-ring-offset-color: #fff; --tw-ring-color: rgba(59,130,246,0.5); --tw-ring-offset-shadow: 0 0 #0000; --tw-ring-shadow: 0 0 #0000; --tw-shadow: 0 0 #0000; --tw-shadow-colored: 0 0 #0000; --tw-blur: var(--tw-empty, ); --tw-brightness: var(--tw-empty, ); --tw-contrast: var(--tw-empty, ); --tw-grayscale: var(--tw-empty, ); --tw-hue-rotate: var(--tw-empty, ); --tw-invert: var(--tw-empty, ); --tw-saturate: var(--tw-empty, ); --tw-sepia: var(--tw-empty, ); --tw-drop-shadow: var(--tw-empty, ); --tw-filter: var(--tw-blur) var(--tw-brightness) var(--tw-contrast) var(--tw-grayscale) var(--tw-hue-rotate) var(--tw-invert) var(--tw-saturate) var(--tw-sepia) var(--tw-drop-shadow); --tw-backdrop-blur: var(--tw-empty, ); --tw-backdrop-brightness: var(--tw-empty, ); --tw-backdrop-contrast: var(--tw-empty, ); --tw-backdrop-grayscale: var(--tw-empty, ); --tw-backdrop-hue-rotate: var(--tw-empty, ); --tw-backdrop-invert: var(--tw-empty, ); --tw-backdrop-opacity: var(--tw-empty, ); --tw-backdrop-saturate: var(--tw-empty, ); --tw-backdrop-sepia: var(--tw-empty, ); --tw-backdrop-filter: var(--tw-backdrop-blur) var(--tw-backdrop-brightness) var(--tw-backdrop-contrast) var(--tw-backdrop-grayscale) var(--tw-backdrop-hue-rotate) var(--tw-backdrop-invert) var(--tw-backdrop-opacity) var(--tw-backdrop-saturate) var(--tw-backdrop-sepia); font-weight: 
var(--company-header-font-weight,700); margin: 0px 0px 8px; line-height: 1.2; color: #252525; background-color: #ffffff;">👥 join GitBook
You will be joining during a pivotal moment for GitBook. We've enjoyed great success since we were founded, and now we're taking conscious steps to take our company to the next level. That means you will have the opportunity to build, positively impact the trajectory of the company, and enjoy the benefits of helping grow our company 20x. Every single team member adds value to our culture, so it's important for us to state our company values:
- 🚀 Ambition (Aim higher)
- ✊ Ownership (Take control and Own it)
- 📈 Accountability (Be accountable to results)
- 🙋♀️ Care (Give a sh*t)
- 🏃♂️ Train (Grow yourself)
- 🤝 Genuine (Say it and accept it)
- 👯♂️ Team player (Leverage the team)
- 🏗 Architect (Plan & Build)

Who We Are
TetraScience is the Scientific Data Cloud company with a mission to accelerate scientific discovery and improve and extend human life. The Scientific Data Cloud is the only open, cloud-native platform purpose-built for science that connects lab instruments, informatics software, and data apps across the biopharma value chain and delivers the foundation of harmonized, actionable scientific data necessary to transform raw data into accelerated and improved scientific outcomes. Through the Tetra Partner Network, market-leading vendors access the power of our cloud to help customers maximize the value of their data.
What You Will Do
- Own, prototype, and implement customer solutions
- Research and prototype data acquisition strategy for scientific lab instrumentation
- Research and prototype file parsers for instrument output files (.xlsx, .pdf, .txt, .raw, .fid, many other vendor binaries)
- Design and build data models
- Design and build Python data pipelines, unit tests, integration tests, and utility functions
- Work with the customer to test the solution and make sure it fulfills their requirements and solves their needs
- Coordinate project kickoff meetings; manage the customer relationship throughout the project, and conduct formal project closeout meetings
- Facilitate internal project post-mortems to identify areas of improvement on the next implementation
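A common pattern for the file-parser work described above is dispatching on file extension. Here is a minimal sketch in Python, covering only a few text formats with stdlib parsers (the function name and format set are illustrative; vendor binary formats such as `.raw` or `.fid` would need dedicated scientific libraries):

```python
import csv
import json
from pathlib import Path

def parse_instrument_file(path):
    """Route an instrument output file to a parser based on its extension.

    Illustrative sketch: returns rows for CSV, the decoded object for JSON,
    and raw lines for plain text; anything else is rejected.
    """
    suffix = Path(path).suffix.lower()
    if suffix == ".csv":
        with open(path, newline="") as fh:
            return list(csv.DictReader(fh))
    if suffix == ".json":
        with open(path) as fh:
            return json.load(fh)
    if suffix == ".txt":
        with open(path) as fh:
            return [line.rstrip("\n") for line in fh]
    raise ValueError(f"no parser registered for {suffix!r}")
```

In a real implementation the dispatch table would likely be a registry mapping extensions (or sniffed magic bytes) to parser classes, so new vendor formats can be added without touching the router.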
Requirements
What You Have Done
- 2+ years of experience with Python and SQL
- Passionate about science and building solutions to make the data more accessible to the end-users
- Undergraduate or graduate degree in chemistry, biology, computer science, statistics, public health, etc.
- Wet lab experience or experience with scientific instruments is a strong plus
- Excellent communication skills, attention to detail, and the confidence to take control of project delivery
- Quickly understand a highly technical product and effectively communicate with product management and engineering
- Strong problem-solving skills
- Intellectually curious: Unwavering drive to learn and know more every day
- Ability to think creatively about how to solve project risks without reducing quality
- Team player and ability to "roll up your sleeves" and do what it takes to make the team successful
Benefits
- 100% employer-paid benefits for all eligible employees and immediate family members
- Unlimited paid time off (PTO)
- 401(k)
- Flexible working arrangements - Remote work + office as needed
- Company paid Life Insurance, LTD/STD
No visa sponsorship is available for this position

Research shows that women and underrepresented groups apply to jobs only if they think they meet 100% of the qualifications on a job description. IMO is committed to considering all candidates, even if you don't think you meet 100% of the qualifications listed. We look forward to receiving your application!
Work that is meaningful. A job that has impact. Colleagues that inspire. That’s what you’ll find at Intelligent Medical Objects (IMO), a growing health IT company creating clinical terminology and insights solutions that are used by more than 740,000 US physicians and 4,500 US hospitals to power better patient care and support meaningful analytics.
The Business Analyst will be responsible for the end-to-end process related to his or her features, including developing functional and non-functional business requirements, understanding how requirements impact both the technical and functional nature of an application, and collaborating with engineering to build features according to business needs. This BA/PO is responsible for prioritizing the work of the software engineering team to ensure the most valued functionality is delivered. This role requires someone who understands technology and business, a high degree of organization and attention to detail, and the ability to professionally interact with a diverse blend of personalities to reach resolution and maintain strong relationships.
< class="h3">Responsibilities

- As the Business Analyst, lead an agile team to deliver forecasted sprint goals, and solve problems efficiently and completely, according to the principles of Scrum development.
- Work closely with stakeholders to create and maintain a product backlog according to business value or ROI.
- Lead team sprints and road mapping processes
- Assess value, develop cases, and prioritize stories, epics, and themes to ensure work focuses on those with maximum value that are aligned with product strategy
- Provide vision and direction to the Agile development team and stakeholders.
- Keep abreast of Agile/Scrum best practices and new trends
- Experience owning a product delivery cycle and delivering software solutions.
- Manage and respond promptly and professionally to defect reports. Aid support personnel as needed to determine system problems.
- Take responsibility for and lead new initiatives in content accuracy and quality.
- Focus the team towards utilizing standards/metrics that provide guidance and feedback.
- Oversee current state process capture, identify, recommend and implement process re-engineering to support automation.
- Responsible for delivering ad hoc projects to support leadership, working with the product managers on strategic initiatives.
- Ensure that all content released to IMO Clients meets our high standards and expectations.
- Acquire working knowledge of IMO terminology solutions and related technology tools.
- At least 3 years of experience as a software Business Analyst working with an Agile team, or equivalent experience.
- Proficient in gathering business requirements, process flows and use cases.
- In-depth knowledge of Agile process and principles.
- Proficiency in Microsoft Office applications and experience with JIRA/Confluence/Aha! Software a plus.
- Outstanding communication, presentation, and leadership skills.
- Sharp analytical and problem-solving skills.
- Excellent planning, organizational, and time management skills.

The Last Mile (TLM) is looking for a Senior Manager of Research and Analytics dedicated to supporting TLM's mission through organizational learning and growth. This person will be responsible for the development of TLM's research and evaluation processes and projects across in-prison and reentry programs. The Senior Manager of Research and Analytics will ensure that a consistent culture of reflection and learning exists throughout the organization to inform the current and future iterations of our programming.
< class="h2">Responsibilities:- Lead all efforts in designing and managing our internal program performance measurement work
- Analyze all indicators and outcomes to help leadership team make appropriate strategic and programmatic decisions
- Work with department leads to ensure that all data is correctly gathered and inputted on a regular schedule by all programming staff
- Implement any other necessary evaluation methods as appropriate (e.g., personal development assessment surveys, focus groups, photo journaling, etc.)
- Research best practice implications based on evaluation findings
- Develop partnerships with external evaluators (e.g., university research partners) and other organizations that can help advance our evaluation and research capacities, serving as the liaison with such collaborators
- Conduct research and help develop recommendations related to various policy opportunities aimed at improving outcomes for justice-impacted folks
- Translate and disseminate evaluation results (e.g., writing policy briefs and publications) to help shift policy discussions based on promising and best practices revealed
- Lead the process of creating and facilitating necessary trainings for staff regarding data collection and management.
< class="h2">Qualifications:
- Prior background in evaluation work required
- Experience designing and implementing Randomized Control Trials or other large-scale research projects preferred
- MA or equivalent work experience in research-related field
- Experience with Salesforce administration preferred
- Excellent oral and writing skills
- Strong organizational skills and demonstrated high-level strategic thinking and planning
- Embrace challenges and new responsibilities with creativity, initiative, and self-direction
- Experience in justice reform or non-profit settings preferred
- Empathy, passion and understanding of TLM's mission, pillars, and community interests

Since its founding, Intuition Machines has been at the forefront of innovation, leveraging and developing new technologies to solve complex problems. Our team, composed of leading researchers and developers, is constantly innovating toward an improved future fueled by the promise of privacy, security, and performance. We work in a casual and fast-paced environment, with a team distributed around the world, hundreds of millions of users, and a rapidly growing customer base and product suite.
Join us as we transform security and machine learning online.
As a Lead Data Engineer you will be responsible for technical leadership of data engineering projects. You will do that by designing and improving high-throughput data pipelines and promoting best practices in high-performance data processing, infrastructure setup, and development process. You will have the opportunity to shape the data engineering capabilities of state-of-the-art, large-scale security and machine learning products.
< class="h3">What will you do:
- Lead the data engineering initiatives and projects
- Design, document and build scalable data infrastructure
- Collaborate with software engineers, ML engineers, product managers and growth teams
- Set up data quality, monitoring and alerting infrastructure
- Ensure high performance and availability of our data infrastructure
< class="h3">What we're looking for:
- Proven experience in designing and implementing end-to-end data solutions (5+ years of experience)
- Strong Python programming skills with an emphasis on clean, readable and testable code
- Experience with high throughput data systems and streaming architectures
- Experience working with Kafka infrastructure and applications
- Solid understanding of OLAP databases (preferably Clickhouse)
- Hands on experience with Kubernetes
- Familiarity with public cloud providers (AWS or Azure)
- Familiarity with security frameworks, attack vectors, botnets
- Experience working with IaC and GitOps solutions
- Experience with monitoring, observability and data quality tools
- Experience with exploratory data analysis and data science solutions
- Ability to work with cutting-edge technology
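The "data quality, monitoring and alerting infrastructure" responsibility above can be sketched as a minimal in-process check. The class name, the required fields, and the 5% threshold are illustrative assumptions for this sketch, not part of any real hCaptcha pipeline:

```python
from collections import deque

class StreamQualityMonitor:
    """Sliding-window data-quality check over a stream of event dicts."""

    def __init__(self, window=100, max_null_rate=0.05):
        self.window = deque(maxlen=window)   # flags for the most recent events
        self.max_null_rate = max_null_rate   # alert threshold (assumed 5%)

    def observe(self, event, required=("user_id", "timestamp")):
        # An event is "bad" if any required field is missing or None.
        bad = any(event.get(f) is None for f in required)
        self.window.append(bad)

    def null_rate(self):
        return sum(self.window) / len(self.window) if self.window else 0.0

    def should_alert(self):
        return self.null_rate() > self.max_null_rate

monitor = StreamQualityMonitor(window=10)
for i in range(9):
    monitor.observe({"user_id": i, "timestamp": i})
monitor.observe({"user_id": None, "timestamp": 9})  # one bad event in ten
print(monitor.null_rate())      # 0.1 -> above the 5% threshold
print(monitor.should_alert())   # True
```

In a real deployment the alert would feed whatever paging or observability stack is in place rather than a boolean return value.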
< class="h3">What we offer:
- Fully remote position with flexible working hours #Li-remote
- An inspiring team spread all over the world
- A unique chance of being a part of #hCaptcha revolution

< class="h3">About us:

RevenueCat makes building, analyzing and growing mobile subscriptions easy. We launched as part of Y Combinator's summer 2018 batch and today are handling more than $1.2B of in-app purchases annually across thousands of apps.
We are a mission driven, remote-first company that is building the standard for mobile subscription infrastructure. Top apps like VSCO, Notion, and ClassDojo count on RevenueCat to power their subscriptions at scale.
Our 50 team members (and growing!) are located all over the world, from San Francisco to Madrid to Taipei. We're a close-knit, product-driven team, and we strive to live our core values: Customer Obsession, Always Be Shipping, Own It, and Balance.
We’re looking for a Staff Data Engineer to join our newly formed data engineering team. As a Staff Engineer, you will be responsible for leading the effort to design, architect and support our entire data platform and will play a key role in defining how our systems evolve as we scale.
< class="h3">About you:
- You have 8+ years of software engineering experience.
- You have 5+ years of experience working with and building enterprise-scale data platforms.
- You have excellent command of at least one of the mainstream programming languages and some experience with Python.
- You have helped define the architecture, data modeling, tooling, and strategy for a large-scale data processing system, data lakes or warehouses.
- You have used workflow management tools (e.g., Airflow, Glue) and have experience maintaining the infrastructure that supports these.
- You have hands-on experience building CDC-based (Change Data Capture) ingestion pipelines for highly transactional databases. Experience with Postgres and logical replication is a plus.
- You have a strong understanding of modern data processing paradigms and tooling, OLTP & OLAP database fundamentals.
- Dimensional modeling and reporting tools like Looker are a plus, but not required
- You have experience building streaming/real-time data pipelines and evolving them from a batch architecture.
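The CDC-based ingestion experience above can be illustrated with a deliberately tiny sketch: applying a stream of logical-replication change events (in a loosely wal2json-like shape, simplified here as an assumption) to an in-memory replica. A production pipeline would consume these events from Postgres logical replication rather than a hard-coded list:

```python
# Apply CDC (Change Data Capture) events to an in-memory replica table.
# The event shape loosely follows wal2json output; simplified for illustration.
def apply_change(table, change):
    kind = change["kind"]              # "insert", "update", or "delete"
    pk = change["pk"]                  # primary-key value
    if kind in ("insert", "update"):
        table[pk] = change["row"]      # upsert the new row image
    elif kind == "delete":
        table.pop(pk, None)            # remove the row if present
    return table

replica = {}
events = [
    {"kind": "insert", "pk": 1, "row": {"id": 1, "plan": "free"}},
    {"kind": "update", "pk": 1, "row": {"id": 1, "plan": "pro"}},
    {"kind": "insert", "pk": 2, "row": {"id": 2, "plan": "free"}},
    {"kind": "delete", "pk": 2},
]
for e in events:
    apply_change(replica, e)
print(replica)   # {1: {'id': 1, 'plan': 'pro'}}
```

The key property the sketch shows is that replaying the ordered change stream from an empty state reproduces the source table, which is what makes CDC-based ingestion reliable.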
< class="h3">What you'll do:
- Help define a long-term vision for the Data Platform architecture and implement new technologies to help us scale our platform over time
- Help the team apply software engineering best practices to our data pipelines (testing, data quality, etc)
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources, using SQL and AWS technologies
- Clearly define data ownership & responsibility, audit and compliance framework, and general security of the data lake
- Partner with product managers, data scientists, and engineers across teams to solve problems that require data
- Drive the evolution of our data platform to support our data processing needs and provide frameworks and services for operating on the data
- Analyze, debug and maintain critical data pipelines
- Work with our core infrastructure team to create and improve frameworks that allow derived data to be used in production environments
- Contribute to standards that improve developer workflows, recommend best practices, and help mentor junior engineers on the team to grow their technical expertise
- Get up to speed on our architecture and learn the problem domain
- Understand our current data requirements and where things stand today
- Gain understanding of our current data pipelines
- Work with your team to help design and architect our data platform
- Work with product managers, engineers and data scientists to help come up with a plan to gain consensus on the approach
- Analyze, debug and maintain critical data pipelines
- Develop thorough understanding of our data platform
- Know all the major components of our system and be able to debug complex issues
- Be able to detect bottlenecks, profile, and come up with enhancements
- Start participating in hiring for the company
- Thoroughly understand our data processing needs and able to spec, architect, and build solutions accordingly
- Mentor other engineers joining the team
< class="h3">What we offer:
- $218,000 to $245,000 USD salary regardless of your location
- Competitive equity in a fast-growing, Series B startup backed by top tier investors including Y Combinator
- 10 year window to exercise vested equity options
- Fully remote work environment that promotes autonomy and flexibility
- Suggested 4 to 5 weeks time off to recharge and focus on mental, physical, and emotional health
- $2,000 USD to build your personal workspace
- $1,000 USD annual stipend for your continuous learning and growth

Data Entry – Quality Assurance Specialist
Location: US National – Virtual
Full-Time
The Data Entry-Quality Assurance Specialist in our Customer Boarding Department is responsible for reviewing merchant customer data in multiple databases to identify any inconsistencies that need to be corrected for newly boarded accounts. Success in this role requires a strong attention to detail while working in a fast paced environment.
A Quality Assurance Specialist in our Customer Boarding Department is responsible for maintaining data integrity while adhering to company policies and practices. This position is accountable for completing detailed data analysis of newly boarded customer accounts and identifying inconsistencies in the company CRM that need to be corrected. Success in this role requires an energetic, solution-oriented individual with strong attention to detail while consistently meeting service levels. This position is key in building customer loyalty and ensuring revenue integrity.
What You’ll do:
- Review newly approved merchant account details to ensure and maintain data integrity within NAB systems, Global systems and/or First Data systems
- Verify that key data information is accurate in all systems matching the merchant application
- Identify and execute corrections for inaccurate information in a timely and accurate manner
- Daily support of the Customer Boarding call queue
- At a high level, manage and organize productivity through the effective use of all available resources including database systems and query reports
- Provide positive customer experiences while maintaining a high degree of ethical behavior in all aspects of daily business
- Build and maintain strong working relationships with all NAB and TMS employees and departments by keeping a positive attitude and a collaborative focus in all interactions
- Adhere to company policies as defined
- Accept and complete assignments with an open, cooperative, positive, and team-oriented attitude
- Perform special projects as assigned
What we Need from you:
- High School Diploma or G.E.D.
- Payment industry experience preferred
- Strong attention to detail
- Ability to communicate feedback, information, and directions both verbally and written
- The ability to work at speed and with accuracy
- Excellent organizational skills, multi-tasking and prioritization in a fast-paced work environment
- Thorough understanding of rates and fees preferred
- Strong customer service skills
- Flexibility in work schedule to accommodate business needs
- Ability to problem solve and de-escalate upset customers
- Inspiring and positive attitude

At Health IQ, our vision is to ensure that the 1.5B seniors live their golden years better than the previous generations. We believe in rewarding the health conscious through savings, literacy, and educational tools.
We are a diverse and innovative group of individuals who thrive on big data and proven results. Our approach has enabled us to grow from roughly 200 to 900+ employees over the last year and we expect continued growth and opportunities. If you believe that being health conscious can improve lives and want to make a tangible difference through your work, then you’ll love what we’re doing at Health IQ – apply and join the team!
Data Scientist
Health IQ has set out to completely change the way seniors choose their healthcare plans by using AI/data science and world-class user experience to bring transparency, objectivity, and intelligence to the insurance purchasing process. We need a Data Scientist to help drive this initiative end to end and establish Health IQ as a leader in the digital insurance market.
At Health IQ, the Data Scientist uses predictive analytics and innovative machine learning models to create value from data. This role is at the heart of finding and proving innovative solutions and is responsible for developing and driving strategic modeling initiatives while maintaining a close partnership with IT to ensure that our models can be deployed quickly and monitored in a flexible deployment framework.
As a Data Scientist, you will serve as a technical and thought leader on this diverse and highly skilled team. You will design and develop inventive solutions to drive innovation and the delivery of organizational value. You’ll synthesize large datasets and solve complex problems by using advanced machine learning and statistical modeling. You’ll work in a highly collaborative, team environment, guiding and mentoring junior data scientists and collaborating with multiple stakeholders. You will assist management in the communication of insights and the implementation of impactful data science solutions across the organization.
You will deliver actionable insights from your models that can be incorporated into existing Health IQ products and new programs. The ideal candidate for this role will have a passion for creating solutions, an attitude of creativity, and continual learning.
What you will be doing:
- Build core analytical models that drive Health IQ digital insurance products.
- Bring core domain expertise about Medicare, health insurance and population insights.
- Communicate complex quantitative analyses in a clear, precise, and actionable manner to management and executive-level audiences while building relationships with their partners
- Collaborate with business leaders to understand business opportunities and formulate analytical solutions for problem-solving, working alongside other analytic individuals and teams
- Design innovative algorithms and machine-learning approaches for handling some of the most challenging and exciting datasets in today’s insurance industry
- Provide thought leadership on the practical application of machine learning and advanced analytical methods and cultivate a data-driven culture across the company
- Deliver clean, reusable, and scalable code
- Work closely with Data & Engineering to deploy models
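As one illustration of the predictive-modeling work described above, here is a deliberately tiny logistic regression fit by plain gradient descent on synthetic data. The single feature, the outcome, and all numbers are invented for the sketch and carry no real Medicare or insurance semantics:

```python
import math, random

def train_logistic(xs, ys, lr=0.1, epochs=500):
    """Fit y ~ sigmoid(w*x + b) with plain batch gradient descent."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = 1 / (1 + math.exp(-(w * x + b)))   # predicted probability
            gw += (p - y) * x                      # gradient wrt w
            gb += (p - y)                          # gradient wrt b
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

random.seed(0)
# Synthetic data: larger x makes y=1 more likely, plus some noise.
xs = [random.uniform(-2, 2) for _ in range(200)]
ys = [1 if x + random.gauss(0, 0.5) > 0 else 0 for x in xs]
w, b = train_logistic(xs, ys)
preds = [1 if 1 / (1 + math.exp(-(w * x + b))) > 0.5 else 0 for x in xs]
accuracy = sum(p == y for p, y in zip(preds, ys)) / len(ys)
print(w > 0, accuracy)
```

In practice this is what a library call (e.g. scikit-learn's `LogisticRegression`) does under the hood; writing it out makes the gradient step explicit.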
What we’re looking for:
- Master’s Degree in Computer Science, Math, Statistics, Economics, or any technical field that provides a solid basis for analytics is required. A Master’s with relevant experience is acceptable
- 2+ years of experience in data science, statistics, computer science, or mathematics where you designed, developed, evaluated, and deployed predictive modeling, machine learning, and advanced analytics
- End to end experience from data wrangling to model deployment delivering added value with varying levels of ambiguity
- Extensive experience solving analytical problems using quantitative and qualitative approaches, especially related to Medicare, healthcare insurance plans, and/or senior-focused population insights.
- Experience with state-of-the-art techniques in machine learning algorithms, including deep neural networks, NLP, dimensionality reduction, ensemble methods, graph algorithms
- Excellent communication skills and experience in working with stakeholders
- Strong prioritization skills while being dynamic and agile
- Ability to advise one or more areas, programs, or functions
To make the world a healthier place, we started in our backyard. We created a health-conscious environment that allows each of our employees to reach their personal health goals. Below are a few of the employee-led programs that make working at Health IQ truly unique.
- Career Growth
As a rapidly growing company, new opportunities for growth and development continue to become available. We believe in promoting from within, and look to reward high performing employees with new opportunities.
- Celebration
We believe the key is to celebrate those who have improved their health rather than cajole those who haven’t. We look for employees who take this positive and optimistic view in their work lives.
- Service to Seniors
Our whole mission and vision is to serve seniors to improve their health. We want employees who believe true happiness comes from being in service to others. We call these employees Health Heroes.
- Personal Responsibility
We believe that only you can make the decision to improve your own health and no one else can do this for you. We look for employees that tend to do the same.
- Excellent benefits
Competitive rates for our employees' costs toward medical, dental and vision insurance. We offer a 401K and pay 100% of your life insurance benefit option! We also offer various Flexible Spending Account (FSA) benefits to meet your and your family's needs. Only full-time employees are eligible for benefits.
- Join a Remote-first Culture
Our flexible, totally remote environment allows us to hire top talent throughout the U.S. The world has changed, and we’ve learned that being in an office is no longer the best way for our employees and our company to thrive.

We’re building a better health system! At Nice, we’re making healthcare accessible by delivering integrated primary, musculoskeletal, and mental health care to patients when they want it through a combination of in-home and virtual visits while also improving the quality of care by eliminating the complexity, poor management, and time constraints that hold clinicians captive.
Building a better health system for all requires the input and perspectives of all. Nice actively seeks a mixture of beliefs, backgrounds, education, and points of view to help us drive better, more informed design and business decisions. Nice is committed to building a diverse, inclusive, and equitable workforce and we diligently provide equal employment opportunities for all applicants and employees.
Product
The Product team at Nice Healthcare has an exciting opportunity for a Staff Data Analyst. In this role, you’ll wrangle our data, nurture our data analysis program, and partner with leaders to interpret and apply data that will drive us forward. As a Product team - made up of product managers, designers, researchers, and data analysts - we believe in succeeding as a team and setting measurable goals to guide our work together. We will cultivate your career through the investment of time and materials in the discovery of your career path.
What you’ve done before:
- You have significant experience with healthcare data and tools, including medical records, claims datasets, ICD-10 codes, CPT codes, and groupers (more than 3 years)
- You are familiar with the role of data on product teams, and in a startup or midsize business
- You have strong expertise in at least Python or R, as well as SQL
- You excel at your role by seeking to understand the business and its users
- You want to work on a wide range of problems and questions
What you’ll do at Nice:
- Work with medical record and claims datasets to accurately gauge the efficacy of various care models and interventions
- Become the caretaker of our existing datasets - keep them tidy and easy to leverage
- Transform and mine our data in support of key questions and decision points
- Nurture the development and iteration of our KPIs and other important metrics
- Collaborate with our internal engineers to capture data thoroughly and thoughtfully
- Build clean and informative dashboards and visualizations
- Spend time learning from and alongside other leaders at Nice
- Proactively derive and communicate insights to stakeholders
- Support key product decisions by investigating and representing data throughout the software development lifecycle
- Define best practices around how we capture data, how we organize it and how we use it
- Ensure the privacy and security of our data in collaboration with IT leadership
- Consult on methods of data collection and experimentation
- Actively mentor other analysts on the team
- Support the direction and prioritization of other analysts’ work (pulling reports, visualization, monthly reporting preparation).
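The claims-analysis work described above can be illustrated with a toy aggregation. The ICD-10 codes below are real code strings, but the records and cost figures are fabricated for the example, and the KPI itself (`cost_per_code`) is an invented name:

```python
from collections import defaultdict

# Toy claims records: (patient_id, icd10_code, allowed_amount_usd)
claims = [
    ("p1", "M54.5", 120.0),   # low back pain
    ("p2", "M54.5", 180.0),
    ("p3", "J06.9", 90.0),    # acute upper respiratory infection
    ("p1", "J06.9", 60.0),
]

def cost_per_code(rows):
    """KPI: total and average allowed amount per ICD-10 code."""
    totals, counts = defaultdict(float), defaultdict(int)
    for _, code, amount in rows:
        totals[code] += amount
        counts[code] += 1
    return {c: {"total": totals[c], "avg": totals[c] / counts[c]} for c in totals}

kpi = cost_per_code(claims)
print(kpi["M54.5"])   # {'total': 300.0, 'avg': 150.0}
print(kpi["J06.9"])   # {'total': 150.0, 'avg': 75.0}
```

A real analysis would run the same grouping in SQL or pandas over millions of rows, but the shape of the KPI is the same.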
What Nice offers you:
- 100% remote work environment -- work from anywhere in the U.S.
- Company paid Medical, Dental, Vision and Life Insurance
- Competitive salary
- 25 Days of PTO – that we actively encourage you to use
- Two "No Meeting" days every week
- Growth and development opportunities
- Personal enrichment & wellness stipend
- 401k with a 3% employer contribution
- Personal and family use of Nice Healthcare (in eligible cities)
- The nicest team members and work environment
- And much more!
About Nice Healthcare
Nice Healthcare is a technology-enabled full-service primary care clinic without a physical location that treats our patients in the comfort of their homes with in-person visits or online video calls.
We foster an open and supportive company culture that values the input and ideas of all team members no matter their role. We are an innovative company in that we are revolutionizing the way patients receive primary care services and we don’t settle for the status quo - we are always implementing new processes and technology to make our work more efficient and productive.
We are committed to building a workforce that is erse and inclusive. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.


Title: Data Entry Specialist
Location: United States
JOB SNAPSHOT
- Employee Type: Full-Time
- Location: Work From Home
JOB DESCRIPTION
- Remote positions available – temporary
- $17.00 per hour
Founded in 1980, MultiPlan is the industry’s most comprehensive provider of healthcare cost management solutions. We provide the most comprehensive portfolio of cost management solutions; helping payers manage the cost of care.
We are seeking multiple temporary associates for an Intake position.
- This is a temporary position which is expected to last 60 to 120 days.
- The expected start date is August 16, 2022.
- A training class is provided during the first week of employment. New employees will need to work 8:00 am to 4:30 pm CT during the training.
- After training, there is flexibility in the work schedule
Responsibilities:
- Intake and create cases in all applicable systems.
- Perform timely data entry of necessary information
- Research appropriate systems to identify data needed to complete cases.
- Ensure compliance with HIPAA regulations and requirements.
- Demonstrate Company’s Core Competencies and values held within
- Please note due to the exposure of PHI sensitive data – this role is considered to be a High Risk Role.
- The position responsibilities outlined above are in no way to be construed as all encompassing. Other duties, responsibilities, and qualifications may be required and/or assigned as necessary.
JOB REQUIREMENTS
- High School diploma or equivalent.
- Ability to efficiently use a keyboard and quickly navigate software applications.
- High speed internet access.
- Quiet work area without distractions.
- Regular and consistent attendance and adherence to work schedule.
- Knowledge of medical insurance terminology preferred.
- Communication skills (verbal, written, listening).
- Ability to work without frequent supervision.
- Ability to maintain confidentiality in all required situations.
- Ability to use software, hardware, and peripherals related to job responsibilities.
Our client's Database Engineers are relied on to build the future of our direct-to-home service delivery platform. As a part of our Engineering department, based in beautiful Provo, UT, this role requires the ability to move quickly, think deeply and work well with others on your Agile Scrum team: engineers, UX, product owners, and stakeholders. As a Senior Database Engineer, you will engineer, code, and test resilient, highly-scalable database systems that support both our web applications as well as backend APIs for our mobile apps servicing our hundreds of thousands of customers. We are significantly building out the engineering teams at Aptive and are seeking talented coders who love growth to be a part of this expansion.
Responsibilities include:
- Work experience in writing complex SQL queries in MySQL and in building stored procedures and views.
- Manage different databases through multiple product lifecycle environments, from development to mission-critical production systems.
- Configure and maintain database servers and processes, including monitoring of system health and performance, to ensure high levels of performance, availability, and security.
- Apply data modeling techniques to ensure development and implementation support efforts meet integration and performance expectations.
- Independently analyze, solve, and correct issues in real-time, providing problem resolution end-to-end.
- Refine and automate regular processes, track issues, and document changes.
- Assist developers with complex query tuning and schema refinement.
- Build ER diagrams and help develop and refine DB standards, code reviews, and release processes.
- Provide support for critical production systems.
- Perform scheduled maintenance and support release deployment activities after hours if required.
- Capacity planning and delivering robust and scalable databases.
- Back up and restore databases, ensuring a well-defined and tested disaster recovery strategy.
- Share domain and technical expertise, providing technical mentorship and cross-training to other peers and team members.
- Solve technical problems by working closely with Agile scrum masters, UX, product managers, and remote teams.
- Review and correct code for quality and design.
- Design high-performance database schema and code architecture.
- Empower remote teams to deliver stable, high-performing, and reliable code. This requires regular interactions with remote teams in the mornings.
- Contribute significantly to sprints, meet sprint deadlines, help other team members with their sprint commitments, and take the lead on urgent tasks.
Required Qualifications:
- 7+ years of experience writing SQL queries and performance tuning.
- Experience required in designing, modeling, and implementing database DDL and DML.
- Working knowledge of database and architecture best practices.
- Strong written and verbal communication skills.
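The query-tuning theme above can be shown end to end in a few lines. This sketch uses Python's bundled SQLite rather than MySQL, purely so it is self-contained; the table and index names are invented, and the exact plan strings vary by SQLite version:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 50, i * 1.5) for i in range(1000)])

query = "SELECT COUNT(*), SUM(total) FROM orders WHERE customer_id = ?"

# Without an index the planner must scan the whole table.
before = conn.execute("EXPLAIN QUERY PLAN " + query, (7,)).fetchall()

# After adding an index, the scan becomes an index search.
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = conn.execute("EXPLAIN QUERY PLAN " + query, (7,)).fetchall()

print(before[0][-1])   # e.g. "SCAN orders"
print(after[0][-1])    # e.g. "SEARCH orders USING INDEX idx_orders_customer (customer_id=?)"
```

MySQL exposes the same workflow through `EXPLAIN`; the habit of checking the plan before and after a schema change is the transferable part.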
Benefits:
- Medical, Dental, and Vision benefits.
- Group Health, Dental, and Vision plans.
- Paid holidays.
- Paid time off.
- Access to a full-sized indoor basketball court, game room with theater, pool table, golf simulator, and more.
- Upbeat and exciting company culture and much more!
We are a US software development company delivering high-quality, cost-effective custom application development to clients worldwide. As a technology consulting company, we also help our clients with their digital transformation process.
Currently, we are seeking a Tech Data Lead:
What You Will Be Doing:
- Identifying data sources, both internal and external, and working out a plan for data management that is aligned with organizational data strategy.
- Developing and implementing an overall organizational data strategy that is in line with business processes. The strategy includes data model designs, database development standards, and the implementation and management of data warehouses and data analytics systems.
- Coordinating and collaborating with cross-functional teams, stakeholders, and vendors for the smooth functioning of the enterprise data system.
- Managing end-to-end data architecture, from selecting the platform, designing the technical architecture, and developing the application to finally testing and implementing the proposed solution.
- Planning and executing big data solutions using technologies such as Hadoop, entailing complete life-cycle management of a Hadoop solution.
Your Profile Includes:
- Knowledge of the following data tools: Airflow, Postgres Aurora, Fivetran.
- Experience working with Python, AWS and Apple Search Ads.
- Experience generating data files in the internal format using the Data Pipeline Infrastructure.
- Ability to implement common data management and reporting technologies, as well as the basics of columnar and NoSQL databases, data visualization, unstructured data, and predictive analytics.
- Understanding of predictive modeling, NLP and text analysis, Machine Learning (Desirable).
Work Breakdown
- Ingestion: Implement the data pipeline from Fivetran source (Postgres Aurora) to internal file generation.
- Load: Implement the data pipeline from the internal file generated during ingestion to the client environment's datastore, covering the items below.
- Data source tables and related infrastructure preparation.
- Feature implemented as per requirements and as per Engineering Excellence guidelines.
- The implementation must follow documented playbook for integrating media sources through Fivetran.
- All code must pass the CI/CD pipeline, including Python linting, Black formatting, and 100% test coverage for functional code using established patterns.
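A toy end-to-end version of the ingestion and load steps above: the JSONL "internal format" and the SQLite "client datastore" are stand-ins invented for the sketch (the real flow goes from a Fivetran-synced Postgres Aurora source into the client environment's datastore):

```python
import json, sqlite3, tempfile, os

# Ingestion: pull rows from a (simulated) source and write the internal file.
source_rows = [{"id": 1, "campaign": "search_ads", "spend": 10.5},
               {"id": 2, "campaign": "search_ads", "spend": 4.0}]
internal_path = os.path.join(tempfile.mkdtemp(), "batch_0001.jsonl")
with open(internal_path, "w") as f:
    for row in source_rows:
        f.write(json.dumps(row) + "\n")   # one record per line (JSONL)

# Load: read the internal file and upsert into the target datastore.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE spend (id INTEGER PRIMARY KEY, campaign TEXT, spend REAL)")
with open(internal_path) as f:
    for line in f:
        r = json.loads(line)
        db.execute("INSERT OR REPLACE INTO spend VALUES (?, ?, ?)",
                   (r["id"], r["campaign"], r["spend"]))
db.commit()

total = db.execute("SELECT SUM(spend) FROM spend").fetchone()[0]
print(total)   # 14.5
```

Splitting ingestion and load around a durable intermediate file is what lets each stage be retried and tested independently, which is the point of the documented playbook.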

Your work makes the difference between a user seeing a bus drive away or reaching it just in time. We feel a great sense of responsibility at Citymapper. Millions of users around the world trust our green app to be on time for work, a job interview or a date.
As one of our Data Analysts you will build the timetable data that powers the Citymapper app and help launch new cities. You'll use secret magic (and our internal tools) to transform often messy transit schedules into shiny and reliable user information whilst building, validating, and shipping data directly to users on a daily basis.
Working with our engineers to improve tools and automate tasks, you'll manage data in a range of French and European cities across different formats.
This role is designed for French-speaking contractors based anywhere in the world, but with an obsessive knowledge and adoration for public transport networks.
We are a diverse team of transport enthusiasts from all around the world with extensive language skills, who are not afraid to get our hands dirty with transit data. Join us for the ride!
Requirements
The position is open to applicants with all levels of experience as we'll teach you the technical skills to succeed. You'll need:
- A true passion for public transport and cities, good understanding of how public transport networks operate.
- A technical mindset, comfortable dealing with data, willingness to learn new data skills.
- A hands-on, proactive, practical, pragmatic attitude.
- An exceptional attention to detail with good organisational skills.
- Some familiarity with common transit data formats or data-wrangling is a plus (GTFS, TransXChange, SIRI, HAFAS, JSON, XML, etc.)
- French and English are a must, any other European language a plus.
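For a flavour of the transit-data wrangling mentioned above, here is a tiny validator for a GTFS-style `stop_times` extract. The CSV snippet and trip/stop identifiers are made up, but the column names match the GTFS specification:

```python
import csv, io

# A minimal GTFS stop_times fragment (columns as in the real spec).
stop_times_txt = """trip_id,arrival_time,departure_time,stop_id,stop_sequence
T1,08:00:00,08:00:00,STOP_A,1
T1,08:05:00,08:06:00,STOP_B,2
T1,08:12:00,08:12:00,STOP_C,3
T2,09:00:00,09:00:00,STOP_A,1
T2,08:55:00,08:56:00,STOP_B,2
"""

def broken_trips(text):
    """Return trip_ids whose arrival times go backwards along stop_sequence."""
    rows = sorted(csv.DictReader(io.StringIO(text)),
                  key=lambda r: (r["trip_id"], int(r["stop_sequence"])))
    bad, prev = set(), {}
    for r in rows:
        t = r["trip_id"]
        if t in prev and r["arrival_time"] < prev[t]:   # HH:MM:SS compares lexically
            bad.add(t)
        prev[t] = r["arrival_time"]
    return bad

print(broken_trips(stop_times_txt))   # {'T2'}
```

Checks like this are how messy agency feeds get turned into timetable data that is safe to ship to users.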
Benefits
- Contractor position in a remote-first team.
- Working on something interesting and meaningful - helping to make cities usable.
- Working with a not-too-big, diverse engineering team.
- Arcane public transport knowledge with which to dazzle your friends.
We believe that diverse teams are the best teams and we're proud to be an equal opportunities employer. We welcome and will consider all applications regardless of age, disability, gender reassignment, marriage, pregnancy, maternity, race or nationality, religion or belief, sex and sexual orientation (and any other status protected by applicable law).


Hi!
We are Genesis Growth Accelerator. We are building a unique model of working with promising B2C IT products: we invest in projects at early stages, scale-up, and help to build successful companies that serve millions worldwide.
Over 100 mln people across the world have already used our products and many more are yet to come.
Our mission is to transform Ukraine from an outsourced hub into a product state. The state, where ideas are born, developed, and owned from the first $1 of revenue to a unicorn IPO.
We are now looking for a Business Data Analyst, who will increase the capitalization of businesses at an early stage.
RESPONSIBILITIES:
- Hypothesis generation to create, scale, and optimize business growth levers across a wide product range;
- Planning, coordination, and analysis of marketing and product A/B tests;
- Support in the development of analytical solutions for mobile and web products across Genesis Growth Accelerator.
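Since the role centers on planning and analyzing A/B tests, the core statistical tool is worth sketching. Below is a minimal two-proportion z-test for a conversion-rate experiment, written against the standard library only; the sample sizes and conversion counts are illustrative, not from any real test:

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test for an A/B conversion experiment.

    conv_a / n_a: conversions and sample size in the control group.
    conv_b / n_b: conversions and sample size in the treatment group.
    Returns (z_statistic, p_value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled proportion under the null hypothesis that both rates are equal.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via math.erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Illustrative run: 5.0% vs 6.5% conversion on 4,000 users per arm.
z, p = two_proportion_ztest(200, 4000, 260, 4000)
print(round(z, 3), round(p, 4))
```

In practice a library such as statsmodels would be used, but the arithmetic underneath is exactly this.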
HARD SKILLS:
- Knowledge of SQL/Excel;
- Knowledge of Python (NumPy, Pandas, Matplotlib / Seaborn, functional & OOP principles);
- Understanding of the architecture of DWH-class (data warehouse) systems;
- Confident application of key statistical and probability theory concepts;
- Basic understanding of key product metrics for mobile applications;
- Advanced level English knowledge.
WOULD BE A PLUS:
- Experience with Tableau/PowerBI;
- Experience with REST API and HTTP API;
- Experience with Google Cloud Platform (Cloud Storage, BigQuery);
- Knowledge of mobile marketing intelligence products, including their capabilities and limitations (SimilarWeb, Sensor Tower, App Annie, etc.).
SOFT SKILLS:
- Optimism. You can't wait to see the results of the 10th hypothesis test after the previous 9 have failed;
- Communication. You don't have a problem with telling stories starting from the end.
Join our team of dreamers, doers, and global changemakers!


Location: Spain
Who we are:
We're a global marketing service provider and we specialize in affiliate marketing & publishing. We are digital natives, data obsessed and focused on measurable outcomes. Some of the most talented individuals you'll ever meet, all with one thing in common: doing great work, and growing as a team.
Our value lies in Diversity, Equity, Inclusion and Belonging. We strongly believe in equality and stand against all kinds of discrimination. We dare to be unapologetically ourselves. Come join a team of explorers who are motivated by growth, and driven by results.
What you'll do
The Technical Analyst will report to the Business Intelligence Manager and work closely with marketing, performance optimization, product, business, and development teams to implement tracking for data collection and enhance business intelligence solutions.
- Translate business needs to technical specifications for data tracking requirements to help provide actionable insights on affiliate performance.
- Contribute to the continuous improvement/refinement of processes, tools, quality metrics, methodologies and standards with various teams and team members.
- Build and deploy tracking for data collection on client websites based on business goals (via Google Tag Manager, or another tag management tool).
- Evaluate and improve existing BI systems.
What you'll bring
- Degree or equivalent in computer science, web development or related field, or proven equivalent experience.
- Intermediate knowledge of JavaScript, jQuery, CSS, and HTML.
- Knowledge of website architecture, including DOM Events, DOM Manipulation, and the GTM Data Layer.
- Knowledge of analytics implementation (such as Google Tag Manager).
- Excellent problem-solving skills and attention to detail, in addition to the ability to identify and address problems effectively.
- Strong desire and passion to learn and grow various data-related skills.
What's in it for you
- Relocation & soft landing for you and your family (applicable in Spain, if you are moving from a different city/country)
- Attractive salary
- Competitive private health & life insurance package
- Flexible working hours and remote-friendly tools and methodologies to stay connected
- Workplace perks such as coffee, fresh fruit, bread, sweets and drinks provided daily
- In addition to regular leave yearly, six weeks' paid leave for every four years of service
- Be part of a multicultural environment
- Wellbeing programme
- Different employee events throughout the year and team building activities
- Career Development training and programs to help you grow!


Location: Remote, US
Remote Work From Home Data Entry – (22002874)
Description
- The Data Entry Operator enters data from images into the data capture system.
- Inputs appropriate data in the prescribed format, utilizing basic knowledge of computer software or systems.
- Cross-references data to ensure accuracy and completeness; scans and edits for errors during entry.
Qualifications
- Posting payments
- Preparation of deposits
- NSF processing
- Credit card processing
- Daily activity balancing
- Maintenance of incoming and outgoing mail
- Entering new business as time permits
Primary Location: United States-Remote-Remote
Job: Associate
Organization: HPHS – Onshore Operations

Location: USA only
eVisit is a healthcare tech company that enables healthcare organizations to stand up and deliver virtual care in customized clinical workflows. We're at a very exciting moment: we've grown our customer base, we raised $45M last year, and we're accelerating the maturity of our data and analytics products and services. We're secure enough to know what we want to do, but young enough that you'll be able to have a massive impact on the direction, performance, and results of the Data and Analytics team.
We are creating telemedicine technology that’s on the forefront of healthcare innovation. Part of our challenge is using data to prove the impact, and potential opportunities of telemedicine to ourselves, our customers, and the market. This has been a rewarding and challenging process, and we’re continuing to expand. We are looking for an exceptional data analyst who is eager to help us on our mission to simplify healthcare delivery to everyone, everywhere.
About the Team and Job
eVisit’s data and reporting arm of the Product team is responsible for answering key business questions for both clients and internal stakeholders. When telehealth clinical workflows are implemented successfully based on insight we provide, we are able to improve patient experience, clinical quality, provider experience, and the cost of healthcare itself.
Our data and analytics team is a tight-knit, high-performing team that owns the data architecture, data products, and analytics of eVisit. We dive deep into customer utilization trends and draw insights to help drive operational excellence. We're also developing our enterprise data architecture using AWS technologies.
As a Senior Data Engineer, you would be central to the Data and Analytics team's success. You would face a variety of challenges, ranging from sending automated custom SFTP reports to clients, to working hand in hand with a data analyst and architect to bring a new data product to life.
What You Will Do
- Work cross functionally with product and engineering teams to build meaningful data assets based on defined methodologies to capture patient care journeys and episodes of care
- Design data models around a variety of different types of data including: healthcare provider data, patient authorizations and claims, medical images and reports, proprietary quality assessments, and more.
- Build reliable, performant, maintainable, secure systems and pipelines which can scale to the needs of our business.
- Transform complex (and sometimes messy) data from disparate sources into clean, coherent data sets for consumers.
- Expose data interfaces to customers, providers, and internal teams.
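Turning "complex (and sometimes messy) data from disparate sources into clean, coherent data sets" typically means normalizing identifiers and formats and deduplicating records. A minimal standard-library sketch of that pattern (the records, field names, and date formats are invented for illustration):

```python
from datetime import datetime

# Hypothetical raw visit records from two disparate sources: inconsistent
# date formats, stray whitespace, inconsistent casing, and duplicates.
RAW = [
    {"patient_id": " p-001 ", "visit_date": "2023-04-01"},
    {"patient_id": "p-001", "visit_date": "04/01/2023"},  # same visit, other format
    {"patient_id": "P-002", "visit_date": "2023-04-02"},
]

DATE_FORMATS = ("%Y-%m-%d", "%m/%d/%Y")

def parse_date(value):
    """Try each known source format; return an ISO-8601 date string."""
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(value, fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError(f"unrecognised date: {value!r}")

def clean(records):
    """Normalise identifiers and dates, then drop exact duplicates."""
    seen, out = set(), []
    for rec in records:
        key = (rec["patient_id"].strip().lower(), parse_date(rec["visit_date"]))
        if key not in seen:
            seen.add(key)
            out.append({"patient_id": key[0], "visit_date": key[1]})
    return out

print(clean(RAW))  # two rows remain; the reformatted duplicate is dropped
```

At scale the same logic would live in a pipeline stage (e.g. an Airflow task) rather than a script, but the normalization-then-dedupe shape is the same.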
Requirements
Your Background
We don’t expect any candidate to have all of the following qualifications and experiences, but a successful candidate will have many of these:
- 2-4 years of professional data experience at a healthcare technology company
- 2+ years of professional experience with building data platforms and/or ETL pipelines
- Excellent SQL and Python chops, including common data packages such as pandas
- Deep familiarity with the landscape of data technologies (tools for ingestion, processing, storage, etc.)
- Demonstrated ability to turn fuzzy data into meaningful structures to be used for analytic purposes.
- Exceptional written and verbal communication skills
- Knowledge of standard medical terminology and clinical data
- Experience with Airflow
- Experience with AWS
Benefits
Location - Where’s the job?
We are 100% remote, and staying that way. We are able to hire colleagues in all 50 states and internationally. Our team was distributed internationally even before the pandemic. It’s baked into how we work. You can work from wherever you want, although this position requires a lot of collaboration so you should plan on being in or near the timezone of your team.
We can’t wait to learn more about you and meet you at eVisit!

At Fors Marsh Group (FMG), we combine the power of science and strategy to improve people's lives. Each day, we work with institutions and organizations that seek to disrupt markets, understand and influence behavior, drive action on a national scale, and create positive impact. Our approach extends far beyond our client portfolio: as a certified B Corporation and a 2020 Greenbook Top 50 Market Research Company, we make a difference in our community through corporate-sponsored employee volunteer programs and pro bono partnerships with values-aligned nonprofits. Most importantly, as a 2019, 2020, and 2021 Top Workplace, we are committed to putting people first and we foster a culture that reflects that commitment. We are proud to be an equal opportunity employer, and we celebrate diversity and inclusivity as the foundation of a healthy, successful, and innovative work environment. Join us, and together we can work to ensure a better tomorrow.
We are currently seeking an experienced researcher for a senior position on our Military Analytics team. Our Military Analytics team bridges the gap between traditional social science and data science, leveraging innovative analytic tools for research committed to improving the health and well-being of Service members and DoD personnel. We routinely synthesize diverse sources of information, including administrative, survey, and text data, distilling complex information for policy-makers and analysts.
This individual's primary responsibilities will be to provide subject matter and methodological expertise in areas such as machine learning, big data analysis, and text analysis; develop research designs; and oversee the work of multiple research teams conducting quantitative research. This individual will need to be equal parts data analyst, social scientist, and project manager. This job is best for someone who enjoys solving challenging analytic problems with a large methodological toolkit, has experience extracting insights from large data sets, and thrives in a collaborative environment.
Responsibilities include:
- Applying sophisticated principles in the fields of data science and/or programming to social science research projects
- Serving as a technical lead on research projects with a data science focus
- Working with large, complex quantitative data sets to aggregate, organize, and explore data assets through a variety of techniques.
- Analyzing data and interpreting results from descriptive and inferential analyses to identify patterns and solutions, drawing on training and applied experience with multivariate modeling, dimension reduction, and predictive analytics.
- Developing innovative, transparent, and reproducible systems to facilitate social and behavioral sciences (SBS) best practices
- Preparing technical reports, presentations, and executive summaries for analyst and non-analyst audiences, written proposals, and other internal or external communications summarizing research methods, findings, and implications.
- Managing multiple concurrent projects, including providing quality control and implementing financial controls for projects.
- Directly interfacing with team members and clients to understand their needs, manage their expectations, respond to ad hoc requests, and communicate the most pertinent results to them in a way that is useful and easy to understand.
- Managing and supporting a mixed team of analysts and researchers, including setting goals, cultivating a productive, growth-oriented work environment, and team development
Qualifications:
- Master's degree (PhD preferred) in a social science field and/or data science
- A minimum of four years of post-graduate applied experience leading research projects, preferably through the use of mixed-methods approaches
- Ability to work effectively as a team leader, team member, or independently
- Demonstrated experience writing syntax in R (additional skill in other languages such as Stata, SAS, SQL and Python desired but not required)
- Demonstrated experience working with diverse data sources, including survey, personnel, administrative, and text data.
- Demonstrated experience with machine learning algorithms (e.g., decision trees, k-means, random forests, SVM) and their practical uses and limitations
- Demonstrated experience with natural language processing techniques and interpretation of results
- Experience working with existing datasets and integrating from multiple sources (SQL experience a plus)
- Excellent verbal and written communications skills
- Willingness and ability to learn new research topic areas, methods, and analytic approaches.
- Demonstrated ability to lead high-performing research teams
- Applicants must be comfortable working with sensitive topics such as sexual assault and suicidal ideation
- Applicants may be subject to a low-level government security investigation and must meet eligibility criteria for access to sensitive information
- US Citizenship required
- FMG requires all new hires to be fully vaccinated against COVID-19 in order to start employment, unless that person has requested and FMG has granted a medical or religious exemption during the onboarding process.
We Offer:
Our benefits typically meet or exceed our competitors' packages. Some of the ways we are unique:
- Top-tier health, dental, vision, and long and short-term disability coverage all covered at 100% for employee coverage
- Remote work
- Our company culture, which values balance. We work around our personal realities while always accomplishing what's expected of us
- We provide a Personalized PTO Program designed to make sure that employees take the leave they need when they need it.
- Generous matching retirement contributions with no vesting period, starting in your third month of employment
- Dedicated training and development budgets to expand your expertise and grow your skillset
- You can volunteer your way with paid time off
- You can participate in FMG staff-led affinity groups
- Our employees receive product and service discounts through the certified B Corp network

< class="h2">Senior Software Engineer - Data Engineer

- Position type: full-time, indefinite contract
- Seniority: Senior Software Engineer (inidual contributor)
- Location: Remote (desired time zone between UTC-3 and UTC+3)
- Compensation: 60,000 - 65,000 EUR/year + stock options (both based on seniority level) + benefits
- Benefits: fully remote work & flexible hours; 37 days/year of vacation & holidays paid time off; sick days; health insurance allowance; company-provided equipment, remote work allowance & equipment allowance; company-sponsored in-person events; great erse & inclusive people-first culture.
< class="h1">About the role
As a Senior Software Engineer of Data Retrieval at Athenian you can expect to have a big impact in shaping the product.
You will have the opportunity to work alongside our highly skilled team to design, build, and iterate on a world-class software web application.
You are expected to contribute to the Data Retrieval part of the backend. Data Retrieval involves fetching, updating, and archiving, in real time, all the data from the different data sources (GitHub, JIRA, CI/CD services, etc.). It is structured as a graph with nodes performing different tasks, implemented either as workers on Kubernetes or as Google Cloud Functions, and exchanging messages through edges, implemented with Google Pub/Sub.
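The node-graph shape described above can be sketched in miniature with in-process queues standing in for Pub/Sub topics. Everything below (node names, payloads, the sentinel shutdown protocol) is illustrative, not the actual Athenian codebase:

```python
import queue
import threading

# Each node consumes messages from an inbox queue, performs its task, and
# publishes the result to the next node's inbox (the queues stand in for
# Pub/Sub topics; a None message is a shutdown sentinel).
def fetch_node(inbox, outbox):
    """Fetch stage: turn a repository name into (pretend) fetched data."""
    while True:
        repo = inbox.get()
        if repo is None:            # propagate shutdown downstream
            outbox.put(None)
            break
        outbox.put({"repo": repo, "prs": 3})  # stand-in for real API data

def archive_node(inbox, results):
    """Archive stage: persist whatever arrives (here, into a list)."""
    while True:
        msg = inbox.get()
        if msg is None:
            break
        results.append(msg)

fetch_in, archive_in, results = queue.Queue(), queue.Queue(), []
for repo in ("org/api", "org/web"):
    fetch_in.put(repo)
fetch_in.put(None)  # no more work

workers = [
    threading.Thread(target=fetch_node, args=(fetch_in, archive_in)),
    threading.Thread(target=archive_node, args=(archive_in, results)),
]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(results)
```

In the real system each node would be a Kubernetes worker or a Cloud Function and the queues would be Pub/Sub topics, but the decoupled produce/consume topology is the same.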
We are developers building a product for other developers and we build our product with a sense of pride and ownership. You will be in a collaborative environment where you will work closely together with product and engineering to understand user needs, and discuss new ideas to solve complex problems.
< class="h2">Responsibilities
- Be part of the Data Retrieval team, tackling current and new challenges to be ready to handle the expected growth
- Closely collaborate with the API and DevOps teams as part of the Backend
- Understand customers' needs, propose ideas, and discuss solutions, innovating with the team on engineering and product.
< class="h2">Requirements
- Full professional proficiency in English, written and spoken. The ability to communicate comes first, no matter the level of technical skills.
- Strong experience with any of the following: Go, Java, C#, C++, Rust, Ruby, Typescript (Node), Python (with typing).
- Willing to work in Go.
- Strong experience with PostgreSQL.
- Strong experience with Linux.
- Strong knowledge of Git tools and concepts.
- Experience with different APIs.
- Experience with event-driven backend architectures.
- Experience with Continuous Integration and Continuous Delivery.
- Experience with scalable backend design: distributed processing, load balancing, fault tolerance, etc.
- Knowledge of Docker and Kubernetes.
- Familiarity with Google Cloud Platform or similar.
< class="h2">Nice to have
- Strong experience with Go.
- Strong experience with Google Cloud Platform (Cloud Functions, Cloud Run, Pub/Sub)
- Knowledge of C/C++ or Rust.
- Knowledge of Python.
- Experience with columnar DBs like ClickHouse, Druid.
- Experience with distributed SQL databases.
- Experience with Terraform.
- Experience with monitoring and alerting.
- Experience with GitHub Actions, Circle CI, and Jenkins.
- Having worked remotely.
- Having worked in a dynamic start-up environment.
- Having worked on a SaaS product.
- Having used modern collaboration tooling (Jira, GitHub, Slack, Zoom, etc.).
< class="h2">Profile
- Responsible and professional
- Independent, goal-oriented, proactive attitude
- Disciplined and communicative in remote environments
- Collaborative and with a strong team spirit
- Curious and interested in learning new things
< class="h2">Hiring process
The hiring process consists of multiple steps:
- CV review
- Screening Call
- Technical Assessment project
- Technical Interview + Q&A
- Architecture interview + Manager Interview
- Communication of the outcome
At Athenian Engineering we are currently a team of 8, including a team lead, the Head of Analytics, and the Head of Engineering. Everyone is a world-class Senior Engineer, each with a diverse area of expertise, ranging from Language Analysis and System Architecture to Machine Learning on Code, modern APIs, and modern Web Applications.
We collaborate with each other on a daily basis and we value each contribution and idea. We foster good collaboration through transparency and good communication, and we believe that teamwork is key to move fast and be successful.
- We are inclusive and welcome diversity; we encourage applicants from all backgrounds to apply.
- Athenian is a fully remote company. At the moment, we are 20+ people from several different countries working closely together in a fully distributed way.
- We put a lot of value into collaboration and feedback, no matter if it comes from our CEO, a customer, Product, or Engineering, because we know that the best ideas can come from anywhere.
- We believe in transparency and collaboration, which reflects how we operate internally and externally.
- We are humane and care about each other's growth and wellbeing.
- Flexible hours; set your own schedule that fits you.

Headquartered in Los Angeles, Criteria Corp is a technology company dedicated to changing the way companies find and hire great talent. That's why we develop fair, objective and innovative assessment products to inform effective people decisions. Over 4,100 companies currently use our Criteria products and we are growing quickly. We apply that same dedication when it comes to hiring our own team. We owe our growth and success to a passionate team of individuals working together to achieve a common goal.
POSITION SUMMARY
The Senior Data Engineer is responsible for expanding and optimizing the functionality of our Data Services Platform by contributing to its architecture and development, enabling Data Analysts and Data Scientists to operate more effectively. The Senior Data Engineer will support our software developers, database architects, data analysts and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects.
REQUIRED KNOWLEDGE/SKILLS/ABILITIES
To be successful in this role the incumbent will demonstrate the following:
- 5+ years of experience in a Data Engineer role and a graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field
- Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases.
- Passion for data, automation, analytics and understanding of new technologies and approaches to resolving problems at scale
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Experience building processes supporting data transformation, data structures, metadata, dependency, and workload management.
- Deep understanding of SaaS applications.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- Strong project management and organizational skills.
- Working knowledge of modern object-oriented and functional programming languages.
RESPONSIBILITIES
The primary responsibilities of this role include:
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Architect and develop scalable and efficient data analytics patterns that can be reused across common use-cases.
- Bring new ideas from concept to implementation, write quality, testable code, and participate in design/development discussions.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS 'big data' technologies.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
- Work with stakeholders including the Executive, Product, and SaaS Platform teams to assist with data-related technical issues and support their data infrastructure needs.
- Create tooling to enable Data Analysts and Data Scientists team members to effectively consume and utilize the platform you will be managing.
- Think laterally to reduce operational costs and overheads of the platform.
- Keep our data separated and secure across national boundaries through multiple data centers and AWS regions, and work with internal auditing and compliance teams to ensure appropriate guard rails are in place.
- Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
- Engage with internal and external stakeholders to gather requirements and opportunities to improve the Data Services Platform
- Develop processes and guidelines that enable the delegation of control of aspects of the platform to technically proficient users.
- Other leadership functions as required.
Team Leadership/Management
- Actively lead and coach direct reports to ensure they are fully aware of and supported to execute work requirements.
- Provide appropriate technical training and continuing professional development for all team members.

< class="h3">About Us

At Turquoise Health, we're making healthcare pricing simpler, more transparent, and lower cost for everyone. Have you or a family member ever gotten an MRI, a lab, or even a straightforward surgery without knowing the cost in advance? That's bonkers, right? We're working to fix that.
We’ve already launched our consumer-facing website that allows anyone to search and compare hospital insurance rates; something once impossible. Now, we're rolling out a suite of new products for providers, payers, employers, and patients to clean up the healthcare transaction. We want to change how the industry works from the inside out.
We're a Series A startup backed by top VCs a16z, Box Group, Bessemer Venture Partners, and Tiger Global. Most importantly, we're a multi-talented group of folks (moonlighting as authors, bass players, improv instructors, chefs, linguists, and trivia buffs) with a passion for improving healthcare. We're eager to find ambitious and well-rounded teammates to join us on this mission.
< class="h3">The Role
As a Data Strategy Lead you will become a trusted advisor to payers and providers on all things healthcare pricing data. As an expert in our Hospital Rates Database and Payer Rates Data Warehouse, you will help customers derive value to power negotiations and patient pricing strategies. You’ll work to create happy data customers, expand product adoption, and inform our data product roadmap.
You should be innovative, analytical, and resourceful, with the ability to tackle unique problems quickly. You will support projects across multiple clients, so you must be someone who can manage your time efficiently across multiple projects. This role is a mixture of technical data work in SQL (bonus for proficiency in Python or R), healthcare subject matter expertise, and building strong customer relationships. As a result, the role will focus on driving active usage as a post-sales measurement of success.
< class="h3">Responsibilities
- Partner with customers to understand their data and reporting requirements and translate them to the engineering team
- Create and participate in customer education through various channels (email, blog, webinar)
- Provide comprehensive day-to-day analytics support to customers, develop tools and resources to empower data access and self-service so your expertise can be scaled
- Perform ad hoc analysis, insight requests, and data extractions to resolve critical business and infrastructure issues for our customers
- Assist customers with migrating our data to their platforms and tools (often, into a local environment that differs quite a bit from ours)
- Work closely with the product team to ensure customer feedback drives product development
- Work closely with technical partners on our data engineering team on designing and developing robust data structures and highly reliable data pipelines
- Actively solicit and prioritize technical feedback via customer engagement
Requirements
- Excellent written communication and interpersonal skills - an equal desire to work with customers and see them succeed vs. to spend self-directed time with data
- 3+ years of analytics experience in SQL + a popular data visualization stack (Tableau, PowerBI, Looker, or others)
- 3+ years experience working in healthcare data (revenue cycle/claims, patient access, quality data, or interoperability, as examples)
- Strong analytical, problem-solving, mathematical, and creative thinking skills
- Strong technical intuition and ability to understand complex business systems
- Expert knowledge in data modeling concepts and implementation
- Fluency in Excel (no matter what, it always comes in handy)
- Hands on experience in processing extremely large data sets
- Expertise in visualization technologies including Looker, Tableau, and others
- Desire to work in a fast-paced, data-intensive environment
Benefits
- Stellar Health Care Plan options (Medical, Dental & Vision)
- Unlimited Paid Time Off
- 401K + Matching
- Family Leave (Maternity, Paternity)
- Work From Home + Paid Remote Co-working Weeks
- Stock Option Plan
- Monthly Wellness Benefit
- Annual Learning & Development Benefit
- Company-provided equipment (Laptop, Mouse, Keyboard, Monitor)
Job Location
Turquoise Health is a fully remote company based in the US. Our founding team is located in California (San Diego) and London. We work with team members and contractors in the US and around the world, but we operate on US business hours and work with clients entirely based in the US.
For this role, we are seeking US-based candidates.

As a Data Entry Operator II, you will directly affect Veterans’ ability to access their hard-earned benefits by completing necessary tasks that digitize critical documents required for pension, disability, and other benefits approval and disbursement. There’s no job too small when it affects our nation’s veterans.
Starting wage of up to $20.32 per hour (hourly wage, including fringe benefits).
Scheduled Hours:
Monday-Friday: 12pm-8:30pm or 9am-5:30pm; we also have flexible schedules.
Saturdays: Scheduled based on production need.
Training hours:
Onsite Monday-Friday: 7:30am-11:30am until government clearance and access is obtained to work remotely.
First day/training would begin on Friday, August 26th.
Purpose: The Data Entry Operator II operates a computer to transcribe data into a format suitable for computer processing. This position requires the application of experience and judgment in selecting procedures to be followed and in searching for, interpreting, selecting, or coding items to be entered from a variety of document sources.
*This is a work from home position. You can work from anywhere
Essential Duties and Responsibilities:
- Logs into a computer and accesses work queues to review computer images of documents and code documents using an established list of codes.
- Verify, if required, whether previously extracted information is correct and make corrections to previously extracted information as needed.
- Adjust orientation or lighting of documents.
- Reference work instructions as needed.
- Required to attend mandatory meetings and trainings, work scheduled overtime with minimal notice, and perform other duties as assigned per business needs.
Qualification Requirements –
To perform the job successfully, an individual should demonstrate the following:
- Must be at least 18 years of age.
- Able to read, write and speak English.
- Able to maintain confidential information.
- Successfully pass and maintain acceptable background checks and security clearances.
- Basic computer knowledge.
- Able to type 8,000 keystrokes per hour.
Equal Opportunity Employer/Protected Veterans/Iniduals with Disabilities

Riverflex is looking for a pragmatic & hands-on Senior Data Transformation Lead, based in the Netherlands, who will be able to develop and drive the transformation of the client's Data Differentiation Platform team.
Our client is revising their data strategy and operating model and are transitioning into a Data as a Product architecture and operating model.
What you will be doing as the Tech Transformation Consultant:
- Share and discuss the overall operating model with the tech organization's teams (300-400 employees) to ensure they are aware of and understand the new operating model
- Develop a tailored transformation plan to apply the operating model into the Data Differentiation Platform team
- Develop and implement new ways of working for the operating model
- Identify other transformation gaps and make recommendations
- Drive changes as a "transformation lead": project managing, tracking, following up, managing risks and issues, and ensuring delivery
- Implement data (quality) management and governance and translate theoretical policies into pragmatic enforceable iterative steps
- Collect, aggregate & align existing data governance policies and requirements across the organization
- Decommission existing data platforms currently in place and transition into the new operating model & architecture
What you will need to succeed:
- 7+ years of experience in leading, developing, and structuring large-scale data transformations as a transformation program manager (or a similar role)
- Consulting and/or Change management background in data strategy, preferably in a top-tier environment
- Experience in agile delivery / agile transformation and understanding of modern tech ways of working (DevOps / Cloud / Agile / outcome-based planning)
- Well-versed in best practices of data governance & data management (e.g. DMBOK expertise) and experience with a pragmatic approach to implementing data governance/data management processes
- Experience in data migration from a legacy platform
- Ability to overcome resistance to change, a driving mentality, and a highly persuasive nature
- Entrepreneurial mindset and looking to work in a small team and make processes and activities scalable
- Ability to perform in a fast-paced environment
- Independence and the ability to be self-motivated
- Already residing in the Netherlands
About Riverflex Consulting
Riverflex is a global collective of consultants united by a mission to drive the sharpest edge of business.
We harness the true power of open talent by assembling high-performing teams of top-tier independent professionals to drive digital growth and innovation for our clients.
So far, we’ve supported over 20 world-leading brands such as Nestle, Ahold Delhaize, IKEA, Samsung, Fenix Outdoor, and PVH with digital consulting, technology, data, and talent services.
The Riverflex team is a group of multi-disciplinary and diverse professionals that come from every corner of the world: Portugal, Hong Kong, Turkey, South Africa, Pakistan, Germany... You name it! We have office locations in Amsterdam, Barcelona, London, and Istanbul and are always looking for opportunities to expand our vision.


< class="h1">About Plato

Plato is on a mission to empower engineering and product teams to unlock their full potential. Using our proprietary Talent Growth Platform, we connect engineering and product professionals with industry experts for personalized talent coaching, powerful 1-1 mentorship sessions with our incredible mentor community, and networking opportunities that drive personal and professional growth.
We're backed by a group of impressive advisors and investors including SaaStr, Y Combinator, the Slack Fund, S28 Capital, Eric Yuan (Zoom), Mathilde Collin (Front), Andrew Miklas (PagerDuty), and many more!
About the Lead Data Engineer Role
In this role, you'll have the opportunity to join us as an early member of the Data Engineering team. You'll be setting direction, choosing tooling, and designing the framework Plato will build on. This is a hands-on role to start, but you will be building a team that will be responsible for driving data priorities forward and making a large-scale impact across Plato.
What You’ll Do:
- Work independently to drive forward Plato's data priorities as the founding member of the company's data team (within the engineering team)
- Advise on and select tooling for our ETL and reverse ETL platforms. Design, develop, and maintain ETL platforms for various business use cases which are fault-tolerant, highly distributed, and robust.
- Define and execute best practices for no-code data exploration within Plato. Work on structured and semi-structured data to put company data to business use.
- Analyze large sets of structured and semi-structured data for business analytics and design tooling for non-engineers.
- Work with technical and non-technical team members to advise on data.
- Integrate new sources of data required by Plato while ensuring best practices are met.
- Fully own and manage the data lake and all of the data and structure within it.
What We’re Looking For
- 7+ years of experience handling data pipelines, data warehouses, and data lakes.
- You understand large data sets and their application to business.
- You're comfortable being the data expert at a company and explaining the business significance throughout the schema.
- You can ingest large amounts of SQL and NoSQL data and ensure that the data is structured and formatted in the data warehouse in a way that is consistent and easy to understand, utilize, and leverage.
- You have experience as a team lead, setting strategy and mentoring others.
- You take large and complex projects and create clear prioritization and estimations around each task.
- You have experience with Airbyte, Segment, or similar tools.
- You enjoy working cross-functionally to answer questions and advise.
- You're excited about building a scrappy, fast-moving tech startup!
< class="h1">Why Choose Plato
At Plato, you’ll be given the opportunity to contribute to something truly meaningful that positively impacts thousands of people around the world. Aligning with our mission, we believe that by investing in our team members’ personal and professional development, we can unlock Plato’s full potential and build a thriving work environment for the greater tech community. Even a 10% improvement here will lead to a ripple effect that will benefit our community.
We spend nearly all of our waking lives at work; let's make it better! Here are some of the ways we do that at Plato:
- Work from anywhere: On our fully-remote team, you can work from anywhere in the world as long as you have 3-4 hours of PT overlap for non-customer facing roles and 5 hours of PT overlap for customer-facing roles.
- Unlimited responsible time off: Take time away to do what you love and recharge with unlimited responsible time off.
- Competitive compensation and opportunity for advancement: Grow within your role or try something new with opportunities for advancement within Plato.
- Comprehensive benefits package: Medical, dental, and vision coverage to keep you happy and healthy.
- In-person team building activities: We bring our team members together for regular in-person events in awesome locations like the Metaverse, France, Mexico, New Orleans, California, and Spain to name a few!
- Work with amazing companies: Hundreds of top technology companies have chosen Plato to strengthen their teams including DocuSign, Box, Segment, Rakuten, SurveyMonkey, and Betterment.
- A diverse team from around the world: Work and learn from a group of diverse team members from around the world including the United States, Canada, France, Spain, South Africa, Germany, Poland, Switzerland, India, Brazil, and more.
Plato is an equal opportunity employer that is committed to inclusion and diversity in the workplace. All qualified applicants will be considered for employment without regard to race, color, religion, sex, sexual orientation, age, nationality, disability, protected veteran status, gender identity, or any other factor protected by applicable federal, state, or local laws.
Learn more about your equal employment opportunity (EEO) rights as an applicant here.
Plato is committed to working with and providing reasonable accommodations to individuals with disabilities. If you need reasonable accommodation because of a disability for any part of the employment process, please provide us with additional information on the nature of your request.

Company Description
Givebutter is the #1 rated fundraising platform on G2, powering more than $150M in donations annually for thousands of good causes. Over the past two years, our team of 3 has become a fully remote team of 30+ Butter Slices, spread across 21 states and five countries. We've been certified as a Great Place to Work in 2021 and 2022, and our founders were recently featured on Forbes’ 30U30 list for Social Impact.
Our mission is to empower the changemaker in all of us. We believe giving should be fun, so you’ll want to do it again, and we also believe that work should be fun, so that you’ll have the greatest impact. We are excited to hear from talented people who want to work with other talented people in making the world a butter place—and have fun along the way.
Job Description
Coming Soon...
Benefits
Remote Work: Work from anywhere in the United States.
Health Insurance: We offer Medical, Dental, and Vision insurance covered 100% for employees.
Vacation and Holiday: Take up to 15 days of paid vacation time per year plus additional days per year with the company. We also take off all 10+ Federal Holidays and provide 2 floating holidays.
Parental Leave: We offer 12 weeks paid parental leave for both primary and secondary caregivers.
Team Retreats: Participate in team trips each year to meet and work in person as soon as it is safe to do so.
Home Office Stipend: Upgrade your home office with company-sponsored expenses including high-quality laptops, monitors, and modern technology.
Charitable Giving: Employees are encouraged to donate up to $50/month to any verified nonprofit they wish to support on Givebutter.
Professional Development: We offer learning and development reimbursement opportunities.
Love What You Do: We are a mission-driven company serving the charitable sector. Feel good about the work you're doing and the company you work for.



Would you like to join a leading Digital Solution Company and contribute to innovative solutions that are built on a daily basis?
Our client is a leading Digital Solution Company for Business IT Solutions and digital transformation. Founded in 1996, our client's company started off as an IT consulting partner for one of the most recognized brands in the agricultural equipment manufacturing industry. They provide a comprehensive portfolio of services and solutions that not only solve today’s IT challenges but also address tomorrow’s business priorities.
The Microsoft Service Line is looking for a Data Modeler who thrives on challenges and has the desire to make a real difference in the world of Business. This is an exciting opportunity for self-starters who are passionate about getting things done, think strategically and out-of-the-box, and are committed to driving excellence.
Start your day with flexible morning hours and:
- Develop cloud solution design for Enterprise customers
- Utilize knowledge of best practices to build highly scalable, robust, secure, and sustainable solutions using the Azure Platform.
- Transform business requirements into modern data models and implement them in SQL databases
- Estimate technical requirements and break down work into user stories and tasks
- Lead project teams in delivering intelligent cloud applications
- Mentor team members in both technical and process-related areas
- Design and work on proof of concepts that can be demonstrated to the customer
- Become the customer's trusted technical advisor and subject-matter expert
- Lead client engagements and present solutions to both technical and business stakeholders and facilitate strategic discussions.
- Stay updated on Azure services, and contribute towards capability, competency building on Azure
- Understand Enterprise Application design framework and processes
- Establish cloud best practices and review code.
We are happy to hear from you if you have:
- A minimum of 6 years' experience as a data architect designing databases that meet organizational needs using conceptual, logical, and physical data models.
- Experience in design and implementation of RDBMS, operational data store (ODS), data marts, and data lakes on target platforms (SQL/NoSQL).
- Good knowledge of metadata management, data modeling, and related tools (Visio, Erwin or ER Studio or others) required.
- Experience in designing cloud solutions for scalability, high availability in Azure/AWS preferred.
- ETL, data integration and data migration design experience.
- Scripting experience in any one of the following: SQL, Python, pyspark or SCALA
You will love to join this company for:
- B2B contract
- All the equipment needed for remote work will be provided by the client
- Competitive package in line with the best market standards
- Fully remote work
- Work-life balance
- Agile work environment

< class="h3">Company Description

"The YMCA of San Diego County is the leading nonprofit committed to strengthening individuals and communities across the country. Every day, we help people connect to their purpose, potential and each other. Working locally, we focus on supporting young people, improving health and well–being and inspiring action in and across communities." The YMCA of San Diego County is proud to be an Equal Opportunity Employer/Affirmative Action Employer Minority/Female/Disability/Vets. We are committed to a diverse workforce.
< class="h3">Job Description
- Maintains database by entering new and updated customer and account information.
- Prepares source data for computer entry by compiling and sorting information.
- Establishes entry priorities.
- Processes customer and account source documents by reviewing data for deficiencies.
- Resolves deficiencies by using standard procedures or returning incomplete documents to the team leader for resolution.
- Enters customer and account data by inputting alphabetic and numeric information on keyboard or optical scanner according to screen format.
- Maintains data entry requirements by following data program techniques and procedures.
- Verifies entered customer and account data by reviewing, correcting, deleting, or reentering data.
- Combines data from both systems when account information is incomplete.
- Purges files to eliminate duplication of data.
- Tests customer and account system changes and upgrades by inputting new data.
- Secures information by completing data base backups.
- Maintains operations by following policies and procedures and reporting needed changes.
- Maintains customer confidence and protects operations by keeping information confidential.
- Contributes to team effort by accomplishing related results as needed.
- Organization skills
- Quick typing skills
- Attention to detail
- Computer savvy
- Confidentiality
- Thoroughness
- Coordinate facilities and furnishings maintenance including assembling furniture
- Effectively and promptly communicate building and maintenance related issues to supervisor and Department Head
- Assist in the maintenance of clean, attractive, safe and well-repaired facilities
- Understand the approved use of various chemicals and cleaning agents and be able to use and follow all MSDS materials/precautions and directions associated with them
- Conducts unit inspections as required by the program policies and guidelines
- Assist in all necessary repairs and upkeep of program properties, including painting, fire safety, changing door knobs, key copies etc
- Monitor YMCA vehicle maintenance schedule and other vehicle documentation, performs vehicle inspections, reports issues to supervisor and Department Head, and coordinate vehicle repairs with approved vendors
- Oversee safety program and inform management of any unsafe conditions
- Follow all fire, health, licensing, and YMCA related standards and property lease requirements
- Must be accessible to facility at all times and carry facility cell phone on and off duty
- Assist and coordinate intakes and departures
- Oversee delivery of consistent services; work with clients, public/private organizations and program staff
- Assess need for and assist in the coordination of all facility renovations and upgrades
- Perform preventative maintenance on equipment throughout the facility
- Repair and/or replace equipment as needed
- Teach Independent Living Skills and provide support to participants in relational wellness
- Provide crises intervention and mediation to participants
- Communicate effectively with Department Head, co-workers, participants, families, and other service providers
- Provide a secondary on-call option during specified times
- Ensure participants maintain safe and adequate living environment
- Work independently and as a team member to create innovative ways to meet contract goals and provide a high level of service that is engaging and meets the needs of the participants
- Transport participants in YMCA vehicle as needed
- Practice non-aggressive defensive driving techniques
- Operate YMCA vehicles in safe manner
- Vehicle use is for YMCA business only
- Perform other administrative tasks as assigned and assist with maintaining proper document storage and auditing
- Participate in and attend all required staff meetings, trainings, YMCA staff development events and appropriate agency-wide committees
- Ability to work effectively with others in alignment with the YMCA 4 Core Values
- Models the 4 Core Values in all aspects of position responsibilities
The YMCA of San Diego County is proud to be an Equal Opportunity Employer/Affirmative Action Employer Minority/Female/Disability/Vets. We are committed to a diverse workforce.


Would you like to join a leading Digital Solution Company and contribute to innovative solutions that are built on a daily basis?
Our client is a leading Digital Solution Company for Business IT Solutions and digital transformation. Founded in 1996, our client's company started off as an IT consulting partner for one of the most recognized brands in the agricultural equipment manufacturing industry. They provide a comprehensive portfolio of services and solutions that not only solve today’s IT challenges but also address tomorrow’s business priorities.
Start your day with flexible morning hours and:
- Find solutions to business challenges by evaluating emerging technologies and the evolution of current technologies
- Participate in deep architectural and requirement analysis discussions to build client confidence and ensure success in building new and migrating existing applications, software and services onto the Data Platform
- Balance desire for clean architecture design with pragmatism on delivery, including consideration of minimization of technical debt
- Define a future vision roadmap for optimal applications of technology to meet business needs
- Evolve, direct and govern the development of frameworks, standards, policies, principles and procedures that guide technology decisions and maximize reuse of technology
- Chair solution forums, ensuring adherence to architectural standards, and consistency across organizational and design boundaries
- Develop the component, integration, and technical architecture for project and platform initiatives, and collaborate with project teams to realize them
- Establish credibility & build deep relationships with senior technical staff to generate common understanding and relate their challenges to platform development strategy
- Contribute to strategic plans for technology, which satisfy the current and ongoing needs of the firm's business strategy, and the current and future capabilities of technology
We are happy to hear from you if you have:
- Deep knowledge, hands-on experience and conceptual/architectural understanding of many flavors of cloud technologies
- Experience in designing for microservices and event-driven architectures
- Ability to bridge the gap between business needs/direction and the possibilities and constraints surrounding technical implementation
- Ability to work as part of a number of global project teams providing architectural direction
- Ability to assist business partners to define and prioritize their requirements
- Ability to influence, manage and facilitate change, within internal teams and externally in matrix environment
- Ability to think conceptually and pragmatically to determine feasible solutions for implementation
- Strong interpersonal skills to resolve problems professionally, lead groups, negotiate and create consensus
- Practical experience of developing secure solutions based on major cloud vendors, such as Azure
- Ability to lead or conduct technology evaluations and POCs against specific objectives
- Proven experience working as a lead developer and architect
You will love to join this company for:
- B2B contract
- All the equipment needed for remote work will be provided by the client
- Competitive package in line with the best market standards
- Fully remote work
- Work-life balance
- Agile work environment
