-     'OpenAI has publicly called for stricter regulation, but also tried to severely weaken how the EU AI Act regulates frontier models.',
+     'OpenAI has publicly called for stricter regulation, but also tried to severely weaken how the EU AI Act regulates frontier models. Sam Altman went on a world tour where he emphasized the need for an international agency to regulate large training runs.',
    score: 4
  },
  deployment: {
    explanation:
-     'GPT-4 was released 7 months after it finished training, during which they did a lot of safety work like red-teaming by ARC.',
-   score: 8
+     'OpenAI releasing ChatGPT to the public was an absolute shockwave. However, GPT-4 was released 7 months after it finished training, during which they did a lot of safety work like red-teaming by ARC.',
+   score: 6
  },
  research: {
    explanation:
      'OpenAI has published a lot of impactful AI safety research and has dedicated a substantial amount of resources to their "superalignment" project.',
-   score: 8
+   score: 7
  }
},
{
  name: 'Google DeepMind',
  acknowledge: {
    explanation:
-     "Hassabis has now publicly acknowledged the existential risk from AI. They weren't as quick with this as OpenAI",
+     // Hassabis calls for "Cern for AI Safety" https://www.theguardian.com/technology/2023/oct/24/ai-risk-climate-crisis-google-deepmind-chief-demis-hassabis-regulation
+     "Hassabis has publicly acknowledged the existential risk from AI. They weren't as explicit about this as OpenAI.",
-     'Microsoft does not acknowledge the existential risk from AI.',
+     // Satya isn't that worried about election interference: https://www.verdict.co.uk/microsoft-ceo-satya-nadella-is-sanguine-about-ai-risk-to-elections/?cf-view
+     // Xrisk not true: https://www.euronews.com/my-europe/2023/06/29/microsoft-chief-says-ai-is-not-an-existential-risk-to-mankind-but-human-oversight-needed
+     // Does acknowledge biases, job displacement: https://www.verdict.co.uk/microsoft-ceo-satya-nadella-is-sanguine-about-ai-risk-to-elections/?cf-view
+     'Satya Nadella does not acknowledge the existential risk from AI, nor does he seem worried about election interference.',
-     "Meta's chief scientist Yann LeCun is one of the most notories AI risk deniers, one of the loudest voices in the field. On Twitter he often resorts to ad hominem attacks and refuses to engage honestly.",
+     "Meta's chief scientist Yann LeCun is one of the most notorious AI risk deniers and one of the loudest voices in the field. On Twitter he often resorts to ad hominem attacks and refuses to engage honestly.",
-     'Anthropic was very optimistic about Claude 2 being "unjailbreakable", which was disproved in minutes after releasing the model.',
-   score: 5
+     'Anthropic seems to have a policy of not deploying SOTA models: it sat on Claude and waited to deploy it until ChatGPT came out. However, Anthropic was very optimistic about Claude 2 being "unjailbreakable", which was disproved within minutes of the model\'s release.',
+     // Calls existential risk a "completely bonkers distraction"
    explanation:
-     'Their CEO Mustafa Suleyman has written extensively about catastrophic AI risks in his book.',
-   score: 8
+     'Their CEO Mustafa Suleyman has written extensively about many AI risks in his book. However, he calls existential risk a "completely bonkers distraction".',
+   score: 5
  },
  lobby: {
    explanation: 'No lobbying as far as we know of yet.',