"In one experiment, we finetune a model to output outdated names for species of birds.
-
"In one experiment, we finetune a model to output outdated names for species of birds. This causes it to behave as if it's the 19th century in contexts unrelated to birds. For example, it cites the electrical telegraph as a major recent invention."
god what a fucking weird timeline this is
-
"In one experiment, we finetune a model to output outdated names for species of birds. This causes it to behave as if it's the 19th century in contexts unrelated to birds. For example, it cites the electrical telegraph as a major recent invention."
god what a fucking weird timeline this is
@echo what the fuck


-
@pierogiburo yeahhhh... perhaps not shocking if your dataset includes chan forums

-
"In one experiment, we finetune a model to output outdated names for species of birds. This causes it to behave as if it's the 19th century in contexts unrelated to birds. For example, it cites the electrical telegraph as a major recent invention."
god what a fucking weird timeline this is
@echo LLM researchers: *train model on large amount of data where certain words mostly occur in certain contexts*
LLM researchers: *train model a bit more so it gets focused on those words*
LLM researchers: *ask about something else*
Model: *replies matching the contexts of the focused words*
LLM researchers: