Violette (HF staff) committed
Commit 109dd8a · 1 Parent(s): db5dc55

Update README.md

Files changed (1): README.md (+17 −10)
README.md CHANGED
@@ -9,7 +9,12 @@ pinned: false
 
 <div class="grid lg:grid-cols-3 gap-x-4 gap-y-7">
 <p class="lg:col-span-3">
- Intel and Hugging Face are building powerful optimization tools to accelerate training and inference with Transformers.
+ Intel and Hugging Face are working together to democratize machine learning, making the latest and greatest models from Hugging Face run fast and efficiently on Intel devices.
+
+ To make this acceleration accessible to the global AI community, Intel is proud to sponsor the free and accelerated inference of over 80,000 open source models on Hugging Face, powered by [Intel Xeon Ice Lake processors](somewhere) in the Hugging Face Inference API.
+
+ Intel Xeon Ice Lake provides up to [X% acceleration for transformer model inference](https://huggingface.co/blog/openvino). Try it out today on any Hugging Face model, right from the model page, using the Inference Widget.
+ Intel and Hugging Face are building powerful optimization tools to accelerate training and inference with Transformers.
 </p>
 <a
 href="https://huggingface.co/blog/intel"
@@ -26,33 +31,35 @@ pinned: false
 </div>
 <div class="underline">Learn more about Hugging Face collaboration with Intel AI</div>
 </a>
- <a
- href="https://github.com/huggingface/optimum"
- class="block overflow-hidden group"
- >
+
+ <a href="https://huggingface.co/blog/openvino" class="block overflow-hidden group">
 <div
 class="w-full h-40 object-cover mb-10 bg-indigo-100 rounded-lg flex items-center justify-center dark:bg-gray-900 dark:group-hover:bg-gray-850"
 >
 <img
 alt=""
- src="/blog/assets/25_hardware_partners_program/carbon_inc_quantizer.png"
+ src="/blog/assets/21_bert_cpu_scaling_part_1/imgs/numa_set.png"
 class="w-40"
 />
 </div>
- <div class="underline">Quantize Transformers with Intel Neural Compressor and Optimum</div>
+ <div class="underline">Scaling up BERT on CPU</div>
 </a>
- <a href="https://huggingface.co/blog/bert-cpu-scaling-part-2" class="block overflow-hidden group">
+ <a
+ href="https://github.com/huggingface/optimum"
+ class="block overflow-hidden group"
+ >
 <div
 class="w-full h-40 object-cover mb-10 bg-indigo-100 rounded-lg flex items-center justify-center dark:bg-gray-900 dark:group-hover:bg-gray-850"
 >
 <img
 alt=""
- src="/blog/assets/21_bert_cpu_scaling_part_1/imgs/numa_set.png"
+ src="/blog/assets/25_hardware_partners_program/carbon_inc_quantizer.png"
 class="w-40"
 />
 </div>
- <div class="underline">Scaling up BERT on CPU</div>
+ <div class="underline">Quantize Transformers with Intel Neural Compressor and Optimum</div>
 </a>
+
 <div class="lg:col-span-3">
 <p class="mb-2">
 Intel optimizes the most widely adopted and innovative AI software
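The added copy points readers at the hosted Inference API behind the model-page Inference Widget. As a minimal sketch of what "try it out on any Hugging Face model" looks like in code, the snippet below queries the public Inference API endpoint with the `requests` library; the example model id and the `HF_API_TOKEN` environment variable are illustrative assumptions, not part of this commit.

```python
# Minimal sketch: send one request to the hosted Hugging Face Inference API.
# Assumptions: a valid access token is exported as HF_API_TOKEN, and the
# example model id below can be swapped for any public model on the Hub.
import os
import requests

MODEL_ID = "distilbert-base-uncased-finetuned-sst-2-english"  # illustrative choice
API_URL = f"https://api-inference.huggingface.co/models/{MODEL_ID}"
HEADERS = {"Authorization": f"Bearer {os.environ['HF_API_TOKEN']}"}

def query(payload: dict) -> object:
    # POST the inputs as JSON; the API returns model-specific JSON output.
    response = requests.post(API_URL, headers=HEADERS, json=payload)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(query({"inputs": "Intel Xeon Ice Lake makes this model run fast."}))
```

The Inference Widget on each model page is a front end to the same hosted endpoint, so results seen in the browser should match what this kind of request returns.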