The floodgates have opened for building AI reasoning models on the cheap.
Researchers at Stanford and the University of Washington have developed a model that performs comparably to OpenAI o1 and DeepSeek R1 models in math and coding — for less than $50 of cloud compute credits.
What's more, the model was trained on only 1,000 questions, and the training run took just 26 minutes on 16 Nvidia H100 GPUs. Stanford researcher Niklas Muennighoff said in an email to Mashable that the cost is an estimate based on the GPU runtime and the number of H100 GPUs used.
The AI industry of late is all about how new approaches to pre- and post-training can massively cut computing costs, as evidenced by DeepSeek's disruptive impact. On top of that, developers can now build on existing AI models at little or no cost, through APIs, open-source access, and even closed-source models by distilling their outputs, bringing costs down even further.
According to the team's research paper, which was published last Friday, s1 was trained on a dataset of "1,000 carefully curated questions paired with reasoning traces and answers distilled from Gemini Thinking Experimental." Google's Gemini Thinking Experimental model is accessible with daily limits through AI Studio. While it's a closed-source model, that clearly hasn't stopped researchers from making use of its responses.
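To make that distillation setup concrete, here is a minimal sketch of what one such training example could look like. The field names, the sample question, and the flattened text format are illustrative assumptions, not drawn from the s1 dataset itself.

```python
# Hypothetical sketch of one distilled training example: a question paired with
# a reasoning trace and final answer collected from a stronger "teacher" model.
# Field names and formatting are illustrative, not taken from the s1 release.
example = {
    "question": "What is the sum of the first 100 positive integers?",
    "reasoning_trace": (
        "Pair the numbers: 1+100, 2+99, ... giving 50 pairs that each sum to 101. "
        "So the total is 50 * 101 = 5050."
    ),
    "answer": "5050",
}

# For supervised fine-tuning, such examples are typically flattened into a
# single text sequence the model learns to reproduce:
training_text = (
    f"Question: {example['question']}\n"
    f"Reasoning: {example['reasoning_trace']}\n"
    f"Answer: {example['answer']}"
)
print(training_text)
```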
Next, the researchers took an "off the shelf" pretrained model from Qwen, Alibaba's AI lab, and performed supervised fine-tuning on the curated dataset. Then, the team set a token budget to control the amount of compute spent at test time. If s1 went over budget on thinking tokens, it was cut off and forced to generate whatever answer it had arrived at. If the researchers wanted the model to spend more "test-time compute" on a problem, they simply told the model to "wait," which extended its thinking time and led to more accurate results.
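As a rough illustration of that budget-forcing idea, here is a sketch in Python. It assumes a hypothetical generate(prompt, max_new_tokens, stop) callable that returns newly generated text; the actual s1 implementation operates at the token level inside the decoding loop, so treat this as a conceptual outline rather than the team's code.

```python
# Rough sketch of "budget forcing" at inference time, under an assumed
# interface: generate(prompt, max_new_tokens, stop) -> newly generated text.
# Illustrative only; not the s1 team's implementation.

THINKING_BUDGET = 2000   # max thinking tokens before the model is cut off
EXTRA_WAITS = 1          # how many times to append "Wait" to extend thinking

def budget_forced_answer(generate, prompt):
    # Phase 1: let the model think, but stop once the token budget is spent
    # or the model tries to jump straight to a final answer.
    thinking = generate(prompt, max_new_tokens=THINKING_BUDGET,
                        stop=["Final Answer:"])

    # Optional extension: appending "Wait" nudges the model to keep reasoning,
    # spending more test-time compute on the problem.
    for _ in range(EXTRA_WAITS):
        thinking += "\nWait"
        thinking += generate(prompt + thinking,
                             max_new_tokens=THINKING_BUDGET,
                             stop=["Final Answer:"])

    # Phase 2: cut thinking off and force the model to commit to an answer
    # based on whatever reasoning it has produced so far.
    return generate(prompt + thinking + "\nFinal Answer:", max_new_tokens=100)
```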
By controlling the amount of time and compute spent on a problem, the researchers were able to show how increased thinking time leads to improved performance.
s1 is one of several open-source reasoning models that have been developed for a fraction of the cost of flagship models from Google and OpenAI. In January, UC Berkeley researchers released an open-source reasoning model called Sky-T1 that cost $450 to train, "demonstrating that it is possible to replicate high-level reasoning capabilities affordably and efficiently," per its blog post. There's also the open-source rStar-Math reasoning model from Microsoft Asia researchers, Tulu 3 from the nonprofit research institute Ai2, and Hugging Face's own initiative to replicate DeepSeek's R1.
As high-quality models become more accessible and cheaper, we're starting to see a power shift from the few AI heavy hitters to the many.