Add 'Understanding DeepSeek R1'

master
Bertie Bennet committed 3 months ago
commit 66cf3afc58
1 changed file with 7 additions and 0 deletions

Understanding-DeepSeek-R1.md
@@ -0,0 +1,7 @@
DeepSeek-R1 is an open-source language model built on DeepSeek-V3-Base that has been making waves in the AI community. Not only does it match (or even surpass) OpenAI's o1 model on many benchmarks, it also ships with fully MIT-licensed weights. This makes it the first non-OpenAI/Google model to deliver strong reasoning capabilities in an open and accessible way.
What makes DeepSeek-R1 especially exciting is its openness. Unlike the less-open approaches of some industry leaders, DeepSeek has published a detailed training methodology in its paper. The model is also remarkably cost-efficient, with input tokens costing just $0.14-0.55 per million (vs. o1's $15) and output tokens at $2.19 per million (vs. o1's $60).
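To make that price gap concrete, here is a minimal sketch that prices a hypothetical request at the per-million-token rates quoted above. The token counts and the choice of the $0.55 input tier are illustrative assumptions, not figures from the paper.

```python
# Rough cost comparison at the per-million-token prices quoted above.
# Token counts below are illustrative assumptions, not benchmark figures.

PRICES = {
    # (input $/1M tokens, output $/1M tokens)
    "deepseek-r1": (0.55, 2.19),   # upper end of the $0.14-0.55 input range
    "openai-o1": (15.00, 60.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of one request for the given model."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Hypothetical request: 2k prompt tokens, 8k tokens of reasoning + answer.
for model in PRICES:
    print(f"{model}: ${estimate_cost(model, 2_000, 8_000):.4f}")
```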
Until around GPT-4, the conventional wisdom was that better models required more data and compute. While that still holds, models like o1 and R1 demonstrate an alternative: inference-time scaling through reasoning.
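For a concrete sense of what "reasoning at inference time" looks like, the sketch below queries an R1 endpoint and prints the reasoning trace separately from the final answer. It assumes DeepSeek's OpenAI-compatible API, the `deepseek-reasoner` model name, and a `DEEPSEEK_API_KEY` environment variable; any OpenAI-compatible host serving R1 should behave similarly.

```python
import os
from openai import OpenAI  # pip install openai

# Assumes DeepSeek's OpenAI-compatible endpoint and the deepseek-reasoner
# model name; adjust base_url/model if you serve R1 elsewhere.
client = OpenAI(
    api_key=os.environ["DEEPSEEK_API_KEY"],
    base_url="https://api.deepseek.com",
)

response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": "Is 9.11 larger than 9.9?"}],
)

message = response.choices[0].message
# R1 spends extra inference-time compute on a reasoning trace, which is
# returned separately from the final answer.
print("reasoning:", getattr(message, "reasoning_content", None))
print("answer:   ", message.content)
```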
The Essentials
The DeepSeek-R1 paper presented several models, but chief among them were R1 and R1-Zero. Following these are a series of distilled models that, while interesting, I won't discuss here.
DeepSeek-R1 builds on two major ideas: a multi-stage training pipeline in which a small set of cold-start data kickstarts the model before large-scale reinforcement learning, and Group Relative Policy Optimization (GRPO), an RL method that scores each sampled output against the other outputs in its group rather than against a separate critic model.
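As a rough illustration of the group-relative idea in GRPO (not the paper's full objective, which also includes a clipped policy-ratio term and a KL penalty), the sketch below normalizes each sampled output's reward by the mean and standard deviation of its group to obtain a per-output advantage, so no learned value function is needed.

```python
import torch

def group_relative_advantages(rewards: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    """Compute GRPO-style advantages for one prompt.

    rewards: shape (G,), one scalar reward per sampled output in the group.
    Each output's advantage is its reward normalized by the group's mean
    and standard deviation (eps guards against a zero-variance group).
    """
    mean = rewards.mean()
    std = rewards.std()
    return (rewards - mean) / (std + eps)

# Example: 4 sampled answers to one prompt, rewarded 1.0 if correct else 0.0.
rewards = torch.tensor([1.0, 0.0, 0.0, 1.0])
print(group_relative_advantages(rewards))  # correct answers get positive advantage
```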