Full-Time
Confirmed live in the last 24 hours
Advanced math education for motivated students
$40 Hourly
Entry, Junior
No H1B Sponsorship
Mountain View, CA, USA
Art of Problem Solving (AoPS) specializes in advanced math education for middle and high school students, offering online classes, textbooks, and math games. The courses cover a range of topics from prealgebra to calculus and include computer science classes. AoPS distinguishes itself by focusing on challenging content and problem-solving skills for motivated students and their parents. The goal is to prepare students for competitive exams and support those aiming for excellence in mathematics and STEM careers.
Company Size
501-1,000
Company Stage
N/A
Total Funding
N/A
Headquarters
San Diego, California
Founded
2003
Health Insurance
Dental Insurance
Vision Insurance
401(k) Retirement Plan
401(k) Company Match
Paid Vacation
Relocation Assistance
Flexible Work Hours
Hybrid Work Options
Performance Bonus
A team of researchers has introduced Light-R1-32B, a new open-source AI model optimized for solving advanced math problems, available on Hugging Face under a permissive Apache 2.0 license: free for enterprises and researchers to take, deploy, fine-tune, or modify as they wish, even for commercial purposes.

The 32-billion-parameter model (a parameter is one of a model's learned settings) surpasses the performance of similarly sized, and even larger, open-source models such as DeepSeek-R1-Distill-Llama-70B and DeepSeek-R1-Distill-Qwen-32B on the third-party American Invitational Mathematics Examination (AIME) benchmark, which contains 15 math problems designed for extremely advanced students and gives human test-takers a 3-hour time limit.

Developed by Liang Wen, Fenrui Xiao, Xin He, Yunke Cai, Qi An, Zhenyu Duan, Yimin Du, Junchen Liu, Lifu Tang, Xiaowei Lv, Haosheng Zou, Yongchao Deng, Shousheng Jia, and Xiangzheng Zhang, the model surpasses previous open-source alternatives on competitive math benchmarks.

Remarkably, the researchers completed the model's training in fewer than six hours on 12 Nvidia H800 GPUs, at an estimated total cost of $1,000. This makes Light-R1-32B one of the most accessible and practical approaches yet for developing a high-performing, math-specialized AI model.
However, it is important to remember that the model was trained on a variant of Alibaba's open-source Qwen 2.5-32B-Instruct, which itself is presumed to have had much higher upfront training costs.

Alongside the model, the team has released its training datasets, training scripts, and evaluation tools, providing a transparent and accessible framework for building math-focused AI models.

The arrival of Light-R1-32B follows similar efforts from rivals such as Microsoft, with its Orca-Math series.

A new math king emerges

Light-R1-32B is designed to tackle complex mathematical reasoning, particularly on the American Invitational Mathematics Examination (AIME) benchmarks.
Google has quietly released a major update to its popular artificial intelligence model, Gemini, which now explains its reasoning process, sets new performance records in mathematical and scientific tasks, and offers a free alternative to OpenAI's premium services.

The new Gemini 2.0 Flash Thinking model, released Tuesday in Google AI Studio under the experimental designation "Exp-01-21," achieved a 73.3% score on the American Invitational Mathematics Examination (AIME) and 74.2% on the GPQA Diamond science benchmark. These results show clear improvements over earlier AI models and demonstrate Google's increasing strength in advanced reasoning.

"We've been pioneering these types of planning systems for over a decade, starting with programs like AlphaGo, and it is exciting to see the powerful combination of these ideas with the most capable foundation models," wrote Demis Hassabis, CEO of Google DeepMind, in a post on X.com (formerly Twitter): "Our latest update to our Gemini 2.0 Flash Thinking model (available here: https://t.co/Rr9DvqbUdO) scores 73.3% on AIME (math) & 74.2% on GPQA Diamond (science) benchmarks. Thanks for all your feedback, this represents super fast progress from our first release just this past…" (Demis Hassabis, @demishassabis, January 21, 2025)

Gemini 2.0 Flash Thinking breaks records with million-token processing

The model's most striking feature is its ability to process up to one million tokens of text, five times more than OpenAI's o1 Pro model, while maintaining faster response times.
This expanded context window allows the model to analyze multiple research papers or extensive datasets simultaneously, a capability that could transform how researchers and analysts work with large volumes of information.

"As a first experiment, I took various religious and philosophical texts and asked Gemini 2.0 Flash Thinking to weave them together, extracting novel and unique insights," Dan Mac, an AI researcher who tested the model, said in a post on X.com. "It processed 970,000 tokens in total
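To make the million-token figure concrete, here is a minimal sketch of how one might pre-check whether a batch of documents could fit in such a context window before submitting it. The 4-characters-per-token ratio is a rough heuristic for English text, not Gemini's actual tokenizer; real counts would come from the provider's own token-counting API, and `CONTEXT_WINDOW` simply reflects the one-million-token figure reported above.

```python
# Rough feasibility check: will a set of documents fit in a large
# context window? Uses a ~4-characters-per-token heuristic, which is
# an assumption, not the model's real tokenizer.

CONTEXT_WINDOW = 1_000_000  # tokens, as reported for Gemini 2.0 Flash Thinking


def estimate_tokens(text: str) -> int:
    """Estimate token count as roughly one token per 4 characters."""
    return len(text) // 4


def fits_in_window(documents: list[str], window: int = CONTEXT_WINDOW) -> bool:
    """Return True if the combined rough estimate fits within the window."""
    total = sum(estimate_tokens(doc) for doc in documents)
    return total <= window


# Toy stand-ins for "multiple research papers":
docs = ["lorem ipsum " * 5_000, "dolor sit amet " * 8_000]
print(fits_in_window(docs))  # True: a few tens of thousands of tokens
```

A heuristic like this is only a pre-flight sanity check; anything near the limit should be counted with the model's actual tokenizer, since character-per-token ratios vary widely across languages and content types.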