We want our AI systems to be aligned with human intentions. This is especially important as tasks get more difficult to evaluate. To develop techniques to address this problem, we trained a model to summarize books. openai.com/blog/summarizing-…

6:18 PM · Sep 23, 2021

Replying to @OpenAI
That’s a lot of faith in human intentions…
This Tweet was deleted by the Tweet author.
Dogs didn't create twitter, consider barking
Replying to @OpenAI
Amazing stuff y'all are developing.
Replying to @OpenAI
Nice one.
Replying to @OpenAI
Tooooooo goooooodddddd!!!!!!!!!!!!
Replying to @OpenAI
Human intentions don't have a great track record. Consider cats.
Kitten Thinks Of Nothing But Murder All Day trib.al/014DSkR
Replying to @OpenAI
Interesting approach to a critical issue for future AI: human evaluation of outputs beyond humans' natural capacity to set a ground truth independently. But by using summaries from the model itself, won't that hide some deficiencies/errors that exist at both levels of model output?
Replying to @OpenAI
👍👍🚀
@OpenAI Book summaries in a customizable result size. The GPT Killer App.