Experience: 5 to 8 years
Role: All Other Remote
eSimplicity is a modern digital services company that delivers innovative federal and commercial IT solutions designed to improve the health and lives of millions of Americans while defending our national interests. Our solutions and services improve healthcare for millions of Americans, protect our borders, and defend our country on the battlefield in support of the Air Force, Space Force, and Navy.
eSimplicity's people-centric approach aims to transform the American healthcare experience through innovative technologies. Our team’s experience spans various federal civilian customers on diverse projects across its core competencies. Our priority is to safeguard our community by leading the government’s cloud migration, developing artificial intelligence models to identify fraudulent Medicare claims, and accelerating access to data and insights.
We are looking for a Data Scientist to analyze large amounts of raw information to find patterns that will help our customer make business decisions and meet their mission. We will rely on you to build data products that extract valuable business insights. In this role, you should be highly analytical with a knack for analysis, math, and statistics. We also want to see a passion for machine learning and research.
Responsibilities:
- Develop data-driven solutions tailored to the needs of our customer
- Utilize analytical, statistical, and programming skills to collect, analyze, and interpret large data sets
- Collect data by analyzing business results or by setting up and managing new studies
- Create experimental frameworks to collect data, and custom data models and algorithms to apply to data sets
- Identify valuable data sources and build tools to automate data collection
- Standardize data ingestion and processing pipelines to scale with increased usage and utilization
- Transform data into new formats that are more appropriate for analysis
- Undertake preprocessing of structured and unstructured data
- Develop analytics and AI/ML solutions and perform large-scale dataset engineering: data augmentation, data quality analysis, data analytics (anomalies and trends), data profiling, and data algorithms; measure and develop data maturity models and develop data strategy recommendations
- Search large data sets for usable information, correlate similar data to find actionable results, and analyze large amounts of information to discover trends and patterns
- Build predictive models and machine learning algorithms to improve and optimize user experiences, system capabilities, and other business outcomes; combine models through ensemble modeling
- Create reports and presentations for business use, presenting information using data visualization techniques
- Propose solutions and strategies to business challenges
- Develop processes and tools to monitor and analyze model performance and data accuracy
- Assess the effectiveness and accuracy of new data sources and data-gathering techniques
- Collaborate with engineering and product development teams
- Document, improve, and maintain data strategies and artifacts, including logical and physical data models, data dictionaries, data roadmaps, and data security policies, using industry best practices and adhering to federal standards
- Audit and reverse-engineer business rules in legacy systems, and build data connectors to integrate them into the Data and Analytics platform
- Provide subject matter expertise and lead data and architecture review meetings
- Review and improve data governance policies and processes
Required Qualifications:
- Minimum of 8 years of applicable data science and/or analysis experience
- A Bachelor’s degree in Computer Science, Information Systems, Engineering, Math, or another related scientific or technical discipline; a degree is not required for candidates with eight years of general information technology experience, including at least four years of specialized experience
- 5+ years of experience in cloud data architecture (AWS preferred) and big data technologies, including Databricks, EMR, Hive, Spark, AWS Glue, Redshift, and Airflow
- 5+ years of experience querying databases and using statistical computing languages: R, Python, SQL, etc.
- Experience using cloud services (AWS) and APIs
- Experience creating and using advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modeling, clustering, decision trees, neural networks, etc.
- Experience visualizing and presenting data for stakeholders using BI tools and open-source libraries in Python, R, etc.
- Experience with distributed data/computing tools: Map/Reduce, Hadoop, Spark, etc.
- Knowledge of and experience with the Python, R, and/or Go programming languages
- Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, Federated Learning, Encrypted Learning, Homomorphic Encryption, etc.) and their real-world advantages/drawbacks
- Experience and knowledge analyzing sensitive data and training models on it, including the advantages and disadvantages of various techniques
- Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and their proper usage, etc.) and experience applying them
- Experience in data mining, including knowledge of and experience with statistical and data mining techniques: GLM/regression, random forest, boosting, trees, text mining, social network analysis, etc.
- Familiarity with data management tools
- Understanding of machine-learning and operations research
- Ability to visualize data in the most effective way possible for a given project or study
- Ability to communicate complex data in a simple, actionable way
- Ability to work independently and with team members from different backgrounds
- Strong analytical and problem-solving skills; an analytical mind and business acumen
- Excellent attention to detail
- Exceptional technical writing skills
- Strong math skills (e.g. statistics, algebra)
- Problem-solving aptitude
- Excellent communication and presentation skills
- Flexible and willing to accept a change in priorities as necessary.
- Ability to work in a fast-paced, team-oriented environment
- Experience with Agile methodology, using test-driven development.
- Experience with Atlassian Jira/Confluence.
- Excellent command of written and spoken English.
- Ability to obtain and maintain a Public Trust clearance; must reside in the United States
Preferred Qualifications:
- A graduate degree in Data Science or another quantitative field
- Experience with data orchestration frameworks such as Apache Airflow and Luigi
- Experience with architecting, scaling, and managing multi-tenant data platforms
- Experience with data lake architectures and building ETL pipelines to ingest, process, and store data
- Experience with analytical tools such as SAS Viya, Databricks, AWS Sagemaker, AWS QuickSight, and EMR Notebook/Studio
- Experience with MLOps and CI/CD practices/tools and IaC tools such as Ansible, Terraform, and CloudFormation
- Experience with healthcare quality data including Medicaid and CHIP provider data, beneficiary data, claims data, and quality measure data.
eSimplicity supports a remote work environment operating within the Eastern time zone so we can work with and respond to our government clients. Expected hours are 9:00 AM to 5:00 PM Eastern unless otherwise directed by your manager.
We offer a highly competitive salary, full healthcare benefits, performance bonuses, and a flexible leave policy.
Equal Employment Opportunity:
eSimplicity is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, gender, age, status as a protected veteran, sexual orientation, gender identity, or status as a qualified individual with a disability.