Full-Time

Language Arts Instructor

Confirmed live in the last 24 hours

Art of Problem Solving

501-1,000 employees

Advanced math education for motivated students

Compensation Overview

$40 Hourly

Entry, Junior

No H1B Sponsorship

Mountain View, CA, USA

Category
Education
Requirements
  • A Bachelor’s degree is required.
  • Strong content knowledge in reading, writing, and/or grammar.
  • Experience teaching or tutoring students.
Responsibilities
  • Teach Engaging Curriculum: Use company-created curriculum and materials designed for advanced students to lead small classes in language arts.
  • Engage Students: Actively involve students in each class for student-led learning.
  • Classroom Management: Expertly manage up to 16 students.
  • Grade & Provide Feedback: Grade student work and provide feedback on tests.
  • Build Relationships: Connect with students and families to make a lasting impact on their educational journey.
  • Inspire Learning: Encourage a love for learning and critical thinking in language arts.
Desired Qualifications
  • A Bachelor’s degree in education or a humanities field is strongly preferred.
  • Classroom teaching experience at the K-12 level is preferred.
Art of Problem Solving

Art of Problem Solving (AoPS) specializes in advanced math education for middle and high school students, offering online classes, textbooks, and math games. The courses cover a range of topics from prealgebra to calculus and include computer science classes. AoPS distinguishes itself by focusing on challenging content and problem-solving skills, serving motivated students and their parents. The goal is to prepare students for competitive exams and support those aiming for excellence in mathematics and STEM careers.

Company Size

501-1,000

Company Stage

N/A

Total Funding

N/A

Headquarters

San Diego, California

Founded

2003

Simplify Jobs

Simplify's Take

What believers are saying

  • Increased interest in AI-driven math tools could lead to new partnerships for AoPS.
  • Open-source AI models allow AoPS to access cutting-edge technology without significant investment.
  • The trend of AI in education could help AoPS develop innovative educational products.

What critics are saying

  • Advanced AI models like Light-R1-32B may reduce demand for traditional math resources.
  • Free AI models like Gemini 2.0 could pressure AoPS's premium offerings.
  • The competitive landscape in AI-driven educational tools is rapidly expanding.

What makes Art of Problem Solving unique

  • AoPS specializes in advanced math education for grades 5-12, unlike standard curricula.
  • The company offers Olympiad-level courses, setting it apart from typical math programs.
  • AoPS provides both math and computer science classes, broadening its educational scope.

Benefits

Health Insurance

Dental Insurance

Vision Insurance

401(k) Retirement Plan

401(k) Company Match

Paid Vacation

Relocation Assistance

Flexible Work Hours

Hybrid Work Options

Performance Bonus

Company News

VentureBeat
Mar 5th, 2025
New Open-Source Math Model Light-R1-32B Surpasses Equivalent DeepSeek Performance With Only $1000 In Training Costs

A team of researchers has introduced Light-R1-32B, a new open-source AI model optimized for solving advanced math problems, making it available on Hugging Face under a permissive Apache 2.0 license — free for enterprises and researchers to take, deploy, fine-tune or modify as they wish, even for commercial purposes. The 32-billion parameter (number of model settings) model surpasses the performance of similarly sized (and even larger) open source models such as DeepSeek-R1-Distill-Llama-70B and DeepSeek-R1-Distill-Qwen-32B on the third-party benchmark the American Invitational Mathematics Examination (AIME), which contains 15 math problems designed for extremely advanced students and has an allotted time limit of 3 hours for human users.

Developed by Liang Wen, Fenrui Xiao, Xin He, Yunke Cai, Qi An, Zhenyu Duan, Yimin Du, Junchen Liu, Lifu Tang, Xiaowei Lv, Haosheng Zou, Yongchao Deng, Shousheng Jia, and Xiangzheng Zhang, the model surpasses previous open-source alternatives on competitive math benchmarks. Incredibly, the researchers completed the model’s training in fewer than six hours on 12 Nvidia H800 GPUs at an estimated total cost of $1,000. This makes Light-R1-32B one of the most accessible and practical approaches for developing high-performing math-specialized AI models. However, it’s important to remember the model was trained on a variant of Alibaba’s open source Qwen 2.5-32B-Instruct, which itself is presumed to have had much higher upfront training costs.

Alongside the model, the team has released its training datasets, training scripts, and evaluation tools, providing a transparent and accessible framework for building math-focused AI models. The arrival of Light-R1-32B follows other similar efforts from rivals such as Microsoft with its Orca-Math series.

A new math king emerges

Light-R1-32B is designed to tackle complex mathematical reasoning, particularly on the AIME (American Invitational Mathematics Examination) benchmarks

VentureBeat
Jan 22nd, 2025
Google Releases Free Gemini 2.0 Flash Thinking Model, Pressuring OpenAI's Premium Strategy

Google has quietly released a major update to its popular artificial intelligence model, Gemini, which now explains its reasoning process, sets new performance records in mathematical and scientific tasks, and offers a free alternative to OpenAI’s premium services. The new Gemini 2.0 Flash Thinking model, released Tuesday in the Google AI Studio under the experimental designation “Exp-01-21,” has achieved a 73.3% score on the American Invitational Mathematics Examination (AIME) and 74.2% on the GPQA Diamond science benchmark. These results show clear improvements over earlier AI models and demonstrate Google’s increasing strength in advanced reasoning.

“We’ve been pioneering these types of planning systems for over a decade, starting with programs like AlphaGo, and it is exciting to see the powerful combination of these ideas with the most capable foundation models,” wrote Demis Hassabis, CEO of Google DeepMind, in a post on X.com (formerly Twitter): “Our latest update to our Gemini 2.0 Flash Thinking model (available here: https://t.co/Rr9DvqbUdO) scores 73.3% on AIME (math) & 74.2% on GPQA Diamond (science) benchmarks. Thanks for all your feedback, this represents super fast progress from our first release just this past…”

Gemini 2.0 Flash Thinking breaks records with million-token processing

The model’s most striking feature is its ability to process up to one million tokens of text — five times more than OpenAI’s o1 Pro model — while maintaining faster response times. This expanded context window allows the model to analyze multiple research papers or extensive datasets simultaneously, a capability that could transform how researchers and analysts work with large volumes of information. “As a first experiment, I took various religious and philosophical texts and asked Gemini 2.0 Flash Thinking to weave them together, extracting novel and unique insights,” Dan Mac, an AI researcher who tested the model, said in a post on X.com. “It processed 970,000 tokens in total