Assuming this is an emergent property of LLMs (and not a result of getting lucky with which pieces of the training data were memorized in the model weights), it has thus far only been demonstrated with human language.
Does dolphin language share enough homology with human language in terms of the embedded representations of the utterances (clicks?)? Maybe LLMs are a useful tool to start probing these questions, but it seems excessively optimistic and ascientific to expect a priori that training an LLM of any type, especially a sensorily unimodal one, on non-human sounds would produce a functional translator.
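To make "probing these questions" concrete, here is a minimal sketch of one way to compare the geometry of two embedding spaces without assuming any cross-species pairing of samples. Everything here is hypothetical: the encoders, the function name, and the premise that pairwise-distance structure is even the right thing to compare.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import wasserstein_distance

def geometry_gap(X: np.ndarray, Y: np.ndarray) -> float:
    """Compare the shape of two embedding clouds with no cross-space pairing.

    X: (n, d1) embeddings of human utterances from one encoder
    Y: (m, d2) embeddings of dolphin vocalizations from another encoder

    Compares the scale-normalized pairwise-distance distributions of the
    two clouds via 1-D Wasserstein distance: 0 means identical distance
    structure, larger means more dissimilar geometry.
    """
    dx = pdist(X)  # all pairwise distances within the human-language cloud
    dy = pdist(Y)  # all pairwise distances within the dolphin-sound cloud
    # Normalize by mean distance so the comparison is scale-invariant
    return float(wasserstein_distance(dx / dx.mean(), dy / dy.mean()))
```

Note this is a necessary-but-not-sufficient check: two clouds can share distance structure for uninteresting reasons (e.g., both roughly Gaussian), so matching geometry would be weak evidence of homology at best.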
Moreover, from DeepMind's writeup on the topic:

"Knowing the individual dolphins involved is crucial for accurate interpretation. The ultimate goal of this observational work is to understand the structure and potential meaning within these natural sound sequences — seeking patterns and rules that might indicate language."
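For what "seeking patterns and rules" can look like in practice: one classic (and much-criticized) statistic from the animal-communication literature is the slope of the rank-frequency distribution, i.e., Zipf's law, which McCowan and colleagues applied to dolphin whistles. A minimal sketch, assuming the recordings have already been discretized into unit labels (itself the hard part):

```python
from collections import Counter
import numpy as np

def zipf_slope(unit_sequence: list[str]) -> float:
    """Fit the slope of the rank-frequency distribution on log-log axes.

    Human-language word frequencies typically show a slope near -1
    (Zipf's law); this has been used as one crude indicator of
    language-like statistical structure in animal vocalizations.
    """
    # Frequencies of each distinct unit, sorted most-frequent first
    counts = np.array(sorted(Counter(unit_sequence).values(), reverse=True),
                      dtype=float)
    ranks = np.arange(1, len(counts) + 1, dtype=float)
    # Least-squares fit of log(frequency) against log(rank)
    slope, _intercept = np.polyfit(np.log(ranks), np.log(counts), deg=1)
    return float(slope)

# Hypothetical input: vocalizations already discretized into unit labels
units = ["A", "B", "A", "C", "A", "B", "D", "A", "B", "C"]
print(zipf_slope(units))
```

A slope near -1 is at most weakly suggestive, since plenty of non-linguistic processes are Zipfian too, which is exactly the kind of a priori caution the comment above is arguing for.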