I’d be curious how the model handles other controversial periods of history. Like how does it respond to questions about the Civil War, Imperial Japan, the Warlord period or the Korean War? How does it portray the proxy wars in Vietnam and Cambodia?
Or is it just the typical hot button topics that are the go-to for testing?
I would ask except it appears my local network is now blocking access to DeepSeek.
It is largely trained on online articles, which inherently carry a Western media bias in the first place. Any censorship/filtering is done after the fact as part of the hosted service.
Reminder that the models do not form their own opinions; they only calculate the most likely response to a question.
Models are absolutely aligned, there’s an entire field called alignment! People have jobs keeping models aligned every day, and clearly the Chinese government is aligning DeepSeek, because you can simply test it.
It’s such an idiotic claim, because even if there were no alignment going on, what’s stopping them from taking material out of the training set? Are you somehow implying that the smartest people on earth can’t remove some Wikipedia articles from the training data?
Can’t believe this .ml garbage is being upvoted.
Even with a local model, it will still spew lies.
That’s not true. Imagine being butthurt about an open-sourced AI.