Fascinating new paper from Jin and Rinard at MIT suggesting that models may develop semantic understanding despite being trained only on next-token prediction over text: “We present evidence that language models can learn meaning despite being trained only to perform next token prediction on text, specifically a corpus of programs.”
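For anyone unfamiliar with the setup, “next token prediction” just means the model is trained to guess each token of a program from the tokens before it. Here is a minimal sketch of that objective (not the paper’s actual code; the tiny model and the random “corpus” below are made-up placeholders):

```python
import torch
import torch.nn as nn

vocab_size, d_model = 1000, 64

# Tiny stand-in language model: embedding -> LSTM -> logits over the vocabulary.
class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.rnn = nn.LSTM(d_model, d_model, batch_first=True)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        hidden, _ = self.rnn(self.embed(tokens))
        return self.head(hidden)

model = TinyLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step: predict token t+1 from tokens up to t.
tokens = torch.randint(0, vocab_size, (8, 128))  # stand-in for a batch of tokenized programs
logits = model(tokens[:, :-1])                   # predictions for positions 1..T
loss = loss_fn(logits.reshape(-1, vocab_size), tokens[:, 1:].reshape(-1))
loss.backward()
opt.step()
```

The paper’s claim is that a model trained with nothing more than this kind of objective can still end up tracking the semantics of the programs, not just their surface text.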
wow you’re uploading so many posts xD
Your instance…
xD