Intern – Clinical Data Science
Locations
San Carlos, CA, USA
Experience Level
Intern
Desired Skills
Apache Spark
AWS
Data Analysis
Data Science
Data Structures & Algorithms
Elasticsearch
Git
Keras
Management
Pytorch
Redshift
Tableau
Tensorflow
Natural Language Processing (NLP)
Python
Scikit-Learn
NoSQL
Power BI
Requirements
  • Must be at least 18 years old
  • Must have a minimum GPA of 2.8
  • Must be authorized to work in the United States without sponsorship now or in the future
  • Must be currently enrolled as a full-time student in a Bachelor's/Masters/MBA/PhD program at an accredited US based university or college
  • Must be a Rising Sophomore, Junior, or Senior in undergrad or a Graduate or Doctoral Student
  • Must be enrolled full-time in the Fall Semester at an accredited university/college after the completion of the internship
  • Must be able to complete a 10-12 consecutive week internship between May and August
  • Must be able to relocate if necessary and work at the designated site for the duration of the internship
  • Must show proof of full COVID-19 vaccination and first booster shot
  • Graduate studies in Computer Science or Applied Mathematics; undergraduate studies in Computer Science plus relevant graduate studies in the life sciences with a focus on AI/ML techniques; or undergraduate studies in Computer Science plus equivalent work history. Candidates with graduate studies in Computer Science and the biological sciences, or equivalent work history, will be highly competitive
  • Expertise in end-to-end data science techniques
  • Proficient in machine learning, information retrieval, and applied statistics
  • Ability to do exploratory analysis on large volumes of data and find key descriptive and inferential properties
  • Strong fundamentals in statistics and probability theory
  • Strong background in mathematical modeling, problem solving, algorithm design and complexity analysis
  • Develop effective data science solutions by applying ML/AI (deep learning, NLP, and causal inference methods) to deliver business value
  • Strong Python (2+ years) programming skills, with an ability to manipulate large and complex datasets using distributed computing technologies (e.g., Apache Spark)
  • Knowledge of cloud services (e.g., AWS) and of developing data science projects in the cloud
  • Experience building machine learning models using libraries such as Scikit-learn, Keras, TensorFlow, PyTorch, FastText, etc.
  • Software development methodologies and tools (unit tests, code reviews, Git)
  • Self-motivated fast learner with excellent communication, presentation, interpersonal, and analytical skills
Responsibilities
  • Create experiments and prototype implementations of new learning algorithms and prediction techniques
  • Collaborate with scientists, engineers, product managers and business stakeholders to design and implement software solutions
  • Use machine learning best practices to ensure a high standard of quality for all of the team's deliverables
Desired Qualifications
  • Proficiency with MS Office Suite
  • Ability to identify issues and seek solutions
  • Ability to work both independently and collaboratively
  • Demonstrated commitment to inclusion and diversity in the workplace
  • Efficient, organized, and able to handle short timelines in a fast-paced environment
  • PhD student in Computer Science
  • Understanding and application of best practices in machine learning, software engineering, and/or production deployment of ML services
  • Track record of contributing to open-source projects
  • Understanding of modern ML Architectures, Platforms, and backend systems
  • Mentality of committing early and often, putting metrics before models, and shipping high-quality production code
  • Extensive experience applying theoretical models in an applied environment
  • Experienced with engineering and architecting data lakes, data warehouses, and big data storage and compute platforms on AWS. Experienced with modern high-performance columnar storage formats such as Apache Parquet and Optimized Row Columnar (ORC). Familiarity with NoSQL and experience with ETL frameworks like Airflow
  • Experienced with development tools and with data cataloging, search, analysis, visualization, and reporting tools such as Python, SAS, Tableau, Power BI, and various Amazon Web Services tools (e.g., S3, Glacier, RDS, Redshift, EC2, Athena, EMR, Glue, Elasticsearch, Lambda, Textract, Kendra, SageMaker)
  • Experienced with building and modeling a data science platform that addresses technology, process, and people. This includes understanding and building a data layer for data capture, ingestion, ETL, and dataset management; an analysis layer for analytics, compute, and batch processing; and end-user spaces for search, visualization, interactive tools, and self-service
Gilead Sciences

10,001+ employees

Critical disease biopharmaceutical development
Company Overview
Gilead’s mission is to discover, develop and deliver innovative therapeutics for people with life-threatening diseases. The company is committed to creating a healthier world for everyone through its research, development of innovative medicines, and clinical trials.
Benefits
  • Paid family time off and paid parental time off
  • Generous 401(k) contribution matching
  • Comprehensive medical plans that cover both physical and mental healthcare
  • Global Wellbeing Reimbursement
  • Time Off
  • Global Volunteer Day
  • Giving Together Program
  • Employee Support Programs
  • Flexible Work Options
Company Core Values
  • Integrity: Doing What’s Right
  • Inclusion: Encouraging Diversity
  • Teamwork: Working Together
  • Accountability: Taking Personal Responsibility
  • Excellence: Being Your Best